00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 89
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3267
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.036 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.037 The recommended git tool is: git
00:00:00.037 using credential 00000000-0000-0000-0000-000000000002
00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.054 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.086 Using shallow fetch with depth 1
00:00:00.086 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.086 > git --version # timeout=10
00:00:00.111 > git --version # 'git version 2.39.2'
00:00:00.111 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.138 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.138 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.389 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.401 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.412 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:03.412 > git config core.sparsecheckout # timeout=10
00:00:03.421 > git read-tree -mu HEAD # timeout=10
00:00:03.436 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:03.451 Commit message: "inventory: add WCP3 to free inventory"
00:00:03.451 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:03.529 [Pipeline] Start of Pipeline
00:00:03.545 [Pipeline] library
00:00:03.547 Loading library shm_lib@master
00:00:03.547 Library shm_lib@master is cached. Copying from home.
00:00:03.565 [Pipeline] node
00:00:03.573 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.575 [Pipeline] {
00:00:03.587 [Pipeline] catchError
00:00:03.589 [Pipeline] {
00:00:03.601 [Pipeline] wrap
00:00:03.611 [Pipeline] {
00:00:03.621 [Pipeline] stage
00:00:03.623 [Pipeline] { (Prologue)
00:00:03.806 [Pipeline] sh
00:00:04.084 + logger -p user.info -t JENKINS-CI
00:00:04.105 [Pipeline] echo
00:00:04.107 Node: GP11
00:00:04.115 [Pipeline] sh
00:00:04.418 [Pipeline] setCustomBuildProperty
00:00:04.427 [Pipeline] echo
00:00:04.428 Cleanup processes
00:00:04.432 [Pipeline] sh
00:00:04.713 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.713 1214512 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.726 [Pipeline] sh
00:00:05.006 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.006 ++ grep -v 'sudo pgrep'
00:00:05.006 ++ awk '{print $1}'
00:00:05.006 + sudo kill -9
00:00:05.006 + true
00:00:05.020 [Pipeline] cleanWs
00:00:05.028 [WS-CLEANUP] Deleting project workspace...
00:00:05.028 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.034 [WS-CLEANUP] done
00:00:05.037 [Pipeline] setCustomBuildProperty
00:00:05.048 [Pipeline] sh
00:00:05.331 + sudo git config --global --replace-all safe.directory '*'
00:00:05.460 [Pipeline] httpRequest
00:00:05.493 [Pipeline] echo
00:00:05.495 Sorcerer 10.211.164.101 is alive
00:00:05.500 [Pipeline] httpRequest
00:00:05.504 HttpMethod: GET
00:00:05.504 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.505 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.531 Response Code: HTTP/1.1 200 OK
00:00:05.531 Success: Status code 200 is in the accepted range: 200,404
00:00:05.531 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.519 [Pipeline] sh
00:00:09.803 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.820 [Pipeline] httpRequest
00:00:09.841 [Pipeline] echo
00:00:09.843 Sorcerer 10.211.164.101 is alive
00:00:09.853 [Pipeline] httpRequest
00:00:09.858 HttpMethod: GET
00:00:09.859 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:09.859 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:09.868 Response Code: HTTP/1.1 200 OK
00:00:09.868 Success: Status code 200 is in the accepted range: 200,404
00:00:09.869 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:01:12.930 [Pipeline] sh
00:01:13.216 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:01:16.510 [Pipeline] sh
00:01:16.791 + git -C spdk log --oneline -n5
00:01:16.792 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:01:16.792 330a4f94d nvme: check pthread_mutex_destroy() return value
00:01:16.792 7b72c3ced nvme: add nvme_ctrlr_lock
00:01:16.792 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock
00:01:16.792 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout
00:01:16.809 [Pipeline] withCredentials
00:01:16.819 > git --version # timeout=10
00:01:16.830 > git --version # 'git version 2.39.2'
00:01:16.848 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:16.850 [Pipeline] {
00:01:16.858 [Pipeline] retry
00:01:16.860 [Pipeline] {
00:01:16.874 [Pipeline] sh
00:01:17.156 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:17.740 [Pipeline] }
00:01:17.756 [Pipeline] // retry
00:01:17.761 [Pipeline] }
00:01:17.781 [Pipeline] // withCredentials
00:01:17.791 [Pipeline] httpRequest
00:01:17.816 [Pipeline] echo
00:01:17.818 Sorcerer 10.211.164.101 is alive
00:01:17.825 [Pipeline] httpRequest
00:01:17.829 HttpMethod: GET
00:01:17.830 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:17.831 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:17.834 Response Code: HTTP/1.1 200 OK
00:01:17.834 Success: Status code 200 is in the accepted range: 200,404
00:01:17.835 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:23.831 [Pipeline] sh
00:01:24.142 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:25.530 [Pipeline] sh
00:01:25.811 + git -C dpdk log --oneline -n5
00:01:25.811 eeb0605f11 version: 23.11.0
00:01:25.811 238778122a doc: update release notes for 23.11
00:01:25.811 46aa6b3cfc doc: fix description of RSS features
00:01:25.811 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:25.811 7e421ae345 devtools: support skipping forbid rule check
00:01:25.821 [Pipeline] }
00:01:25.836 [Pipeline] // stage
00:01:25.843 [Pipeline] stage
00:01:25.845 [Pipeline] { (Prepare)
00:01:25.865 [Pipeline] writeFile
00:01:25.877 [Pipeline] sh
00:01:26.160 + logger -p user.info -t JENKINS-CI
00:01:26.172 [Pipeline] sh
00:01:26.457 + logger -p user.info -t JENKINS-CI
00:01:26.469 [Pipeline] sh
00:01:26.752 + cat autorun-spdk.conf
00:01:26.752 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.752 SPDK_TEST_NVMF=1
00:01:26.752 SPDK_TEST_NVME_CLI=1
00:01:26.752 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.752 SPDK_TEST_NVMF_NICS=e810
00:01:26.752 SPDK_TEST_VFIOUSER=1
00:01:26.752 SPDK_RUN_UBSAN=1
00:01:26.752 NET_TYPE=phy
00:01:26.752 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:26.752 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:26.760 RUN_NIGHTLY=1
00:01:26.764 [Pipeline] readFile
00:01:26.788 [Pipeline] withEnv
00:01:26.790 [Pipeline] {
00:01:26.803 [Pipeline] sh
00:01:27.083 + set -ex
00:01:27.083 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:27.083 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.083 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.083 ++ SPDK_TEST_NVMF=1
00:01:27.083 ++ SPDK_TEST_NVME_CLI=1
00:01:27.083 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.083 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.083 ++ SPDK_TEST_VFIOUSER=1
00:01:27.083 ++ SPDK_RUN_UBSAN=1
00:01:27.083 ++ NET_TYPE=phy
00:01:27.083 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:27.083 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.083 ++ RUN_NIGHTLY=1
00:01:27.083 + case $SPDK_TEST_NVMF_NICS in
00:01:27.083 + DRIVERS=ice
00:01:27.083 + [[ tcp == \r\d\m\a ]]
00:01:27.083 + [[ -n ice ]]
00:01:27.083 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:27.083 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:27.083 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:27.083 rmmod: ERROR: Module irdma is not currently loaded
00:01:27.083 rmmod: ERROR: Module i40iw is not currently loaded
00:01:27.083 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:27.083 + true
00:01:27.083 + for D in $DRIVERS
00:01:27.083 + sudo modprobe ice
00:01:27.083 + exit 0
00:01:27.092 [Pipeline] }
00:01:27.109 [Pipeline] // withEnv
00:01:27.114 [Pipeline] }
00:01:27.130 [Pipeline] // stage
00:01:27.140 [Pipeline] catchError
00:01:27.141 [Pipeline] {
00:01:27.156 [Pipeline] timeout
00:01:27.156 Timeout set to expire in 50 min
00:01:27.158 [Pipeline] {
00:01:27.173 [Pipeline] stage
00:01:27.175 [Pipeline] { (Tests)
00:01:27.190 [Pipeline] sh
00:01:27.475 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.475 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.475 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.475 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:27.475 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.475 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:27.475 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:27.475 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:27.475 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:27.475 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:27.475 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:27.475 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:27.475 + source /etc/os-release
00:01:27.475 ++ NAME='Fedora Linux'
00:01:27.475 ++ VERSION='38 (Cloud Edition)'
00:01:27.475 ++ ID=fedora
00:01:27.475 ++ VERSION_ID=38
00:01:27.475 ++ VERSION_CODENAME=
00:01:27.475 ++ PLATFORM_ID=platform:f38
00:01:27.475 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:27.475 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:27.475 ++ LOGO=fedora-logo-icon
00:01:27.475 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:27.475 ++ HOME_URL=https://fedoraproject.org/
00:01:27.475 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:27.475 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:27.475 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:27.475 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:27.475 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:27.475 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:27.475 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:27.475 ++ SUPPORT_END=2024-05-14
00:01:27.475 ++ VARIANT='Cloud Edition'
00:01:27.475 ++ VARIANT_ID=cloud
00:01:27.475 + uname -a
00:01:27.475 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:27.475 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:28.412 Hugepages
00:01:28.412 node hugesize free / total
00:01:28.412 node0 1048576kB 0 / 0
00:01:28.412 node0 2048kB 0 / 0
00:01:28.412 node1 1048576kB 0 / 0
00:01:28.412 node1 2048kB 0 / 0
00:01:28.412 
00:01:28.412 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:28.412 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:28.412 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:28.412 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:28.412 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:28.412 + rm -f /tmp/spdk-ld-path
00:01:28.412 + source autorun-spdk.conf
00:01:28.412 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.412 ++ SPDK_TEST_NVMF=1
00:01:28.412 ++ SPDK_TEST_NVME_CLI=1
00:01:28.412 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.412 ++ SPDK_TEST_NVMF_NICS=e810
00:01:28.412 ++ SPDK_TEST_VFIOUSER=1
00:01:28.412 ++ SPDK_RUN_UBSAN=1
00:01:28.412 ++ NET_TYPE=phy
00:01:28.412 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:28.412 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.412 ++ RUN_NIGHTLY=1
00:01:28.412 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:28.412 + [[ -n '' ]]
00:01:28.412 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.671 + for M in /var/spdk/build-*-manifest.txt
00:01:28.671 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:28.671 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.671 + for M in /var/spdk/build-*-manifest.txt
00:01:28.671 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:28.671 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:28.671 ++ uname
00:01:28.671 + [[ Linux == \L\i\n\u\x ]]
00:01:28.671 + sudo dmesg -T
00:01:28.671 + sudo dmesg --clear
00:01:28.671 + dmesg_pid=1215835
00:01:28.671 + [[ Fedora Linux == FreeBSD ]]
00:01:28.671 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.671 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.671 + sudo dmesg -Tw
00:01:28.671 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:28.671 + [[ -x /usr/src/fio-static/fio ]]
00:01:28.671 + export FIO_BIN=/usr/src/fio-static/fio
00:01:28.671 + FIO_BIN=/usr/src/fio-static/fio
00:01:28.671 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:28.671 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:28.671 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:28.671 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:28.671 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:28.671 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:28.671 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:28.671 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:28.671 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:28.671 Test configuration:
00:01:28.671 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.671 SPDK_TEST_NVMF=1
00:01:28.671 SPDK_TEST_NVME_CLI=1
00:01:28.671 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:28.671 SPDK_TEST_NVMF_NICS=e810
00:01:28.671 SPDK_TEST_VFIOUSER=1
00:01:28.671 SPDK_RUN_UBSAN=1
00:01:28.671 NET_TYPE=phy
00:01:28.671 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:28.671 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.671 RUN_NIGHTLY=1
13:37:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:28.671 13:37:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:28.671 13:37:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:28.671 13:37:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:28.671 13:37:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:28.671 13:37:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:28.671 13:37:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:28.671 13:37:06 -- paths/export.sh@5 -- $ export PATH
00:01:28.671 13:37:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:28.671 13:37:06 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:28.671 13:37:06 -- common/autobuild_common.sh@437 -- $ date +%s
00:01:28.671 13:37:06 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720957026.XXXXXX
00:01:28.671 13:37:06 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720957026.IzAmFf
00:01:28.671 13:37:06 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:01:28.671 13:37:06 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']'
00:01:28.671 13:37:06 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.671 13:37:06 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:28.671 13:37:06 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:28.671 13:37:06 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:28.671 13:37:06 -- common/autobuild_common.sh@453 -- $ get_config_params
00:01:28.671 13:37:06 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:01:28.671 13:37:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.671 13:37:06 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:28.671 13:37:06 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:01:28.671 13:37:06 -- pm/common@17 -- $ local monitor
00:01:28.671 13:37:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:28.671 13:37:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:28.671 13:37:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:28.671 13:37:06 -- pm/common@21 -- $ date +%s
00:01:28.671 13:37:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:28.671 13:37:06 -- pm/common@21 -- $ date +%s
00:01:28.671 13:37:06 -- pm/common@25 -- $ sleep 1
00:01:28.671 13:37:06 -- pm/common@21 -- $ date +%s
00:01:28.671 13:37:06 -- pm/common@21 -- $ date +%s
00:01:28.671 13:37:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720957026
00:01:28.671 13:37:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720957026
00:01:28.671 13:37:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720957026
00:01:28.671 13:37:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720957026
00:01:28.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720957026_collect-vmstat.pm.log
00:01:28.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720957026_collect-cpu-load.pm.log
00:01:28.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720957026_collect-cpu-temp.pm.log
00:01:28.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720957026_collect-bmc-pm.bmc.pm.log
00:01:29.606 13:37:07 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:01:29.606 13:37:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:29.606 13:37:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:29.606 13:37:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.606 13:37:07 -- spdk/autobuild.sh@16 -- $ date -u
00:01:29.606 Sun Jul 14 11:37:07 AM UTC 2024
00:01:29.606 13:37:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:29.606 v24.05-13-g5fa2f5086
00:01:29.606 13:37:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:29.606 13:37:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:29.606 13:37:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:29.606 13:37:07 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:01:29.606 13:37:07 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:29.606 13:37:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.606 ************************************
00:01:29.606 START TEST ubsan
00:01:29.606 ************************************
00:01:29.606 13:37:07 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:01:29.606 using ubsan
00:01:29.606 
00:01:29.606 real 0m0.000s
00:01:29.606 user 0m0.000s
00:01:29.606 sys 0m0.000s
00:01:29.606 13:37:07 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:01:29.606 13:37:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:29.606 ************************************
00:01:29.606 END TEST ubsan
00:01:29.606 ************************************
00:01:29.864 13:37:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:29.864 13:37:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:29.864 13:37:07 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:29.864 13:37:07 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
00:01:29.864 13:37:07 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:29.864 13:37:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.864 ************************************
00:01:29.864 START TEST build_native_dpdk
00:01:29.864 ************************************
00:01:29.864 13:37:07 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:29.864 eeb0605f11 version: 23.11.0
00:01:29.864 238778122a doc: update release notes for 23.11
00:01:29.864 46aa6b3cfc doc: fix description of RSS features
00:01:29.864 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:29.864 7e421ae345 devtools: support skipping forbid rule check
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:29.864 13:37:07 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:29.865 13:37:07 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:29.865 13:37:07 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:29.865 patching file config/rte_config.h
00:01:29.865 Hunk #1 succeeded at 60 (offset 1 line).
00:01:29.865 13:37:07 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:01:29.865 13:37:07 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:01:29.865 13:37:07 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:01:29.865 13:37:07 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:29.865 13:37:07 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:34.059 The Meson build system
00:01:34.059 Version: 1.3.1
00:01:34.059 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:34.059 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:34.059 Build type: native build
00:01:34.059 Program cat found: YES (/usr/bin/cat)
00:01:34.059 Project name: DPDK
00:01:34.059 Project version: 23.11.0
00:01:34.059 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:34.059 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:34.059 Host machine cpu family: x86_64
00:01:34.059 Host machine cpu: x86_64
00:01:34.059 Message: ## Building in Developer Mode ##
00:01:34.059 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:34.059 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:34.059 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:34.059 Program python3 found: YES (/usr/bin/python3)
00:01:34.059 Program cat found: YES (/usr/bin/cat)
00:01:34.059 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:34.059 Compiler for C supports arguments -march=native: YES
00:01:34.059 Checking for size of "void *" : 8
00:01:34.059 Checking for size of "void *" : 8 (cached)
00:01:34.059 Library m found: YES
00:01:34.059 Library numa found: YES
00:01:34.059 Has header "numaif.h" : YES
00:01:34.059 Library fdt found: NO
00:01:34.059 Library execinfo found: NO
00:01:34.059 Has header "execinfo.h" : YES
00:01:34.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:34.059 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:34.059 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:34.059 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:34.059 Run-time dependency openssl found: YES 3.0.9
00:01:34.059 Run-time dependency libpcap found: YES 1.10.4
00:01:34.059 Has header "pcap.h" with dependency libpcap: YES
00:01:34.059 Compiler for C supports arguments -Wcast-qual: YES
00:01:34.059 Compiler for C supports arguments -Wdeprecated: YES
00:01:34.059 Compiler for C supports arguments -Wformat: YES
00:01:34.059 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:34.059 Compiler for C supports arguments -Wformat-security: NO
00:01:34.059 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.059 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:34.059 Compiler for C supports arguments -Wnested-externs: YES
00:01:34.059 Compiler for C supports arguments -Wold-style-definition: YES
00:01:34.059 Compiler for C supports arguments -Wpointer-arith: YES
00:01:34.059 Compiler for C supports arguments -Wsign-compare: YES
00:01:34.059 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:34.059 Compiler for C supports arguments -Wundef: YES
00:01:34.059 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.059 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:34.059 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:34.059 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:34.059 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:34.059 Program objdump found: YES (/usr/bin/objdump) 00:01:34.059 Compiler for C supports arguments -mavx512f: YES 00:01:34.059 Checking if "AVX512 checking" compiles: YES 00:01:34.059 Fetching value of define "__SSE4_2__" : 1 00:01:34.059 Fetching value of define "__AES__" : 1 00:01:34.059 Fetching value of define "__AVX__" : 1 00:01:34.059 Fetching value of define "__AVX2__" : (undefined) 00:01:34.059 Fetching value of define "__AVX512BW__" : (undefined) 00:01:34.059 Fetching value of define "__AVX512CD__" : (undefined) 00:01:34.059 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:34.060 Fetching value of define "__AVX512F__" : (undefined) 00:01:34.060 Fetching value of define "__AVX512VL__" : (undefined) 00:01:34.060 Fetching value of define "__PCLMUL__" : 1 00:01:34.060 Fetching value of define "__RDRND__" : 1 00:01:34.060 Fetching value of define "__RDSEED__" : (undefined) 00:01:34.060 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:34.060 Fetching value of define "__znver1__" : (undefined) 00:01:34.060 Fetching value of define "__znver2__" : (undefined) 00:01:34.060 Fetching value of define "__znver3__" : (undefined) 00:01:34.060 Fetching value of define "__znver4__" : (undefined) 00:01:34.060 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:34.060 Message: lib/log: Defining dependency "log" 00:01:34.060 Message: lib/kvargs: Defining dependency "kvargs" 00:01:34.060 Message: lib/telemetry: Defining dependency "telemetry" 00:01:34.060 Checking for function "getentropy" : NO 00:01:34.060 Message: lib/eal: Defining dependency "eal" 00:01:34.060 Message: lib/ring: Defining dependency "ring" 00:01:34.060 Message: lib/rcu: Defining dependency "rcu" 00:01:34.060 Message: lib/mempool: 
Defining dependency "mempool" 00:01:34.060 Message: lib/mbuf: Defining dependency "mbuf" 00:01:34.060 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:34.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.060 Compiler for C supports arguments -mpclmul: YES 00:01:34.060 Compiler for C supports arguments -maes: YES 00:01:34.060 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:34.060 Compiler for C supports arguments -mavx512bw: YES 00:01:34.060 Compiler for C supports arguments -mavx512dq: YES 00:01:34.060 Compiler for C supports arguments -mavx512vl: YES 00:01:34.060 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:34.060 Compiler for C supports arguments -mavx2: YES 00:01:34.060 Compiler for C supports arguments -mavx: YES 00:01:34.060 Message: lib/net: Defining dependency "net" 00:01:34.060 Message: lib/meter: Defining dependency "meter" 00:01:34.060 Message: lib/ethdev: Defining dependency "ethdev" 00:01:34.060 Message: lib/pci: Defining dependency "pci" 00:01:34.060 Message: lib/cmdline: Defining dependency "cmdline" 00:01:34.060 Message: lib/metrics: Defining dependency "metrics" 00:01:34.060 Message: lib/hash: Defining dependency "hash" 00:01:34.060 Message: lib/timer: Defining dependency "timer" 00:01:34.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.060 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:34.060 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:34.060 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:34.060 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:34.060 Message: lib/acl: Defining dependency "acl" 00:01:34.060 Message: lib/bbdev: Defining dependency "bbdev" 00:01:34.060 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:34.060 Run-time dependency libelf found: YES 0.190 00:01:34.060 Message: lib/bpf: Defining dependency "bpf" 00:01:34.060 
Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:34.060 Message: lib/compressdev: Defining dependency "compressdev" 00:01:34.060 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:34.060 Message: lib/distributor: Defining dependency "distributor" 00:01:34.060 Message: lib/dmadev: Defining dependency "dmadev" 00:01:34.060 Message: lib/efd: Defining dependency "efd" 00:01:34.060 Message: lib/eventdev: Defining dependency "eventdev" 00:01:34.060 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:34.060 Message: lib/gpudev: Defining dependency "gpudev" 00:01:34.060 Message: lib/gro: Defining dependency "gro" 00:01:34.060 Message: lib/gso: Defining dependency "gso" 00:01:34.060 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:34.060 Message: lib/jobstats: Defining dependency "jobstats" 00:01:34.060 Message: lib/latencystats: Defining dependency "latencystats" 00:01:34.060 Message: lib/lpm: Defining dependency "lpm" 00:01:34.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.060 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:34.060 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:34.060 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:34.060 Message: lib/member: Defining dependency "member" 00:01:34.060 Message: lib/pcapng: Defining dependency "pcapng" 00:01:34.060 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:34.060 Message: lib/power: Defining dependency "power" 00:01:34.060 Message: lib/rawdev: Defining dependency "rawdev" 00:01:34.060 Message: lib/regexdev: Defining dependency "regexdev" 00:01:34.060 Message: lib/mldev: Defining dependency "mldev" 00:01:34.060 Message: lib/rib: Defining dependency "rib" 00:01:34.060 Message: lib/reorder: Defining dependency "reorder" 00:01:34.060 Message: lib/sched: Defining dependency "sched" 00:01:34.060 Message: lib/security: Defining dependency "security" 00:01:34.060 Message: lib/stack: 
Defining dependency "stack" 00:01:34.060 Has header "linux/userfaultfd.h" : YES 00:01:34.060 Has header "linux/vduse.h" : YES 00:01:34.060 Message: lib/vhost: Defining dependency "vhost" 00:01:34.060 Message: lib/ipsec: Defining dependency "ipsec" 00:01:34.060 Message: lib/pdcp: Defining dependency "pdcp" 00:01:34.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.060 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:34.060 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:34.060 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:34.060 Message: lib/fib: Defining dependency "fib" 00:01:34.060 Message: lib/port: Defining dependency "port" 00:01:34.060 Message: lib/pdump: Defining dependency "pdump" 00:01:34.060 Message: lib/table: Defining dependency "table" 00:01:34.060 Message: lib/pipeline: Defining dependency "pipeline" 00:01:34.060 Message: lib/graph: Defining dependency "graph" 00:01:34.060 Message: lib/node: Defining dependency "node" 00:01:35.454 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:35.454 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.454 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.454 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.454 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:35.454 Compiler for C supports arguments -Wno-unused-value: YES 00:01:35.454 Compiler for C supports arguments -Wno-format: YES 00:01:35.454 Compiler for C supports arguments -Wno-format-security: YES 00:01:35.454 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:35.454 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:35.454 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:35.454 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:35.454 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.454 
Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.454 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:35.454 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:35.454 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:35.454 Has header "sys/epoll.h" : YES 00:01:35.454 Program doxygen found: YES (/usr/bin/doxygen) 00:01:35.454 Configuring doxy-api-html.conf using configuration 00:01:35.454 Configuring doxy-api-man.conf using configuration 00:01:35.454 Program mandb found: YES (/usr/bin/mandb) 00:01:35.454 Program sphinx-build found: NO 00:01:35.454 Configuring rte_build_config.h using configuration 00:01:35.454 Message: 00:01:35.454 ================= 00:01:35.454 Applications Enabled 00:01:35.454 ================= 00:01:35.454 00:01:35.454 apps: 00:01:35.454 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:35.454 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:35.454 test-pmd, test-regex, test-sad, test-security-perf, 00:01:35.454 00:01:35.454 Message: 00:01:35.454 ================= 00:01:35.454 Libraries Enabled 00:01:35.454 ================= 00:01:35.454 00:01:35.454 libs: 00:01:35.454 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:35.454 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:35.454 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:35.454 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:35.454 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:35.454 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:35.454 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:35.454 00:01:35.454 00:01:35.454 Message: 00:01:35.454 =============== 00:01:35.454 Drivers Enabled 00:01:35.454 =============== 00:01:35.454 00:01:35.454 common: 00:01:35.454 00:01:35.454 bus: 
00:01:35.454 pci, vdev, 00:01:35.454 mempool: 00:01:35.454 ring, 00:01:35.454 dma: 00:01:35.454 00:01:35.454 net: 00:01:35.454 i40e, 00:01:35.454 raw: 00:01:35.454 00:01:35.454 crypto: 00:01:35.454 00:01:35.454 compress: 00:01:35.454 00:01:35.454 regex: 00:01:35.454 00:01:35.454 ml: 00:01:35.454 00:01:35.454 vdpa: 00:01:35.454 00:01:35.454 event: 00:01:35.454 00:01:35.454 baseband: 00:01:35.454 00:01:35.454 gpu: 00:01:35.454 00:01:35.454 00:01:35.454 Message: 00:01:35.454 ================= 00:01:35.454 Content Skipped 00:01:35.454 ================= 00:01:35.454 00:01:35.454 apps: 00:01:35.454 00:01:35.454 libs: 00:01:35.454 00:01:35.454 drivers: 00:01:35.454 common/cpt: not in enabled drivers build config 00:01:35.454 common/dpaax: not in enabled drivers build config 00:01:35.454 common/iavf: not in enabled drivers build config 00:01:35.454 common/idpf: not in enabled drivers build config 00:01:35.454 common/mvep: not in enabled drivers build config 00:01:35.454 common/octeontx: not in enabled drivers build config 00:01:35.454 bus/auxiliary: not in enabled drivers build config 00:01:35.454 bus/cdx: not in enabled drivers build config 00:01:35.454 bus/dpaa: not in enabled drivers build config 00:01:35.454 bus/fslmc: not in enabled drivers build config 00:01:35.454 bus/ifpga: not in enabled drivers build config 00:01:35.454 bus/platform: not in enabled drivers build config 00:01:35.454 bus/vmbus: not in enabled drivers build config 00:01:35.454 common/cnxk: not in enabled drivers build config 00:01:35.454 common/mlx5: not in enabled drivers build config 00:01:35.454 common/nfp: not in enabled drivers build config 00:01:35.454 common/qat: not in enabled drivers build config 00:01:35.454 common/sfc_efx: not in enabled drivers build config 00:01:35.454 mempool/bucket: not in enabled drivers build config 00:01:35.454 mempool/cnxk: not in enabled drivers build config 00:01:35.454 mempool/dpaa: not in enabled drivers build config 00:01:35.454 mempool/dpaa2: not in enabled 
drivers build config 00:01:35.454 mempool/octeontx: not in enabled drivers build config 00:01:35.454 mempool/stack: not in enabled drivers build config 00:01:35.454 dma/cnxk: not in enabled drivers build config 00:01:35.454 dma/dpaa: not in enabled drivers build config 00:01:35.454 dma/dpaa2: not in enabled drivers build config 00:01:35.454 dma/hisilicon: not in enabled drivers build config 00:01:35.454 dma/idxd: not in enabled drivers build config 00:01:35.454 dma/ioat: not in enabled drivers build config 00:01:35.454 dma/skeleton: not in enabled drivers build config 00:01:35.454 net/af_packet: not in enabled drivers build config 00:01:35.454 net/af_xdp: not in enabled drivers build config 00:01:35.454 net/ark: not in enabled drivers build config 00:01:35.454 net/atlantic: not in enabled drivers build config 00:01:35.454 net/avp: not in enabled drivers build config 00:01:35.454 net/axgbe: not in enabled drivers build config 00:01:35.454 net/bnx2x: not in enabled drivers build config 00:01:35.454 net/bnxt: not in enabled drivers build config 00:01:35.454 net/bonding: not in enabled drivers build config 00:01:35.454 net/cnxk: not in enabled drivers build config 00:01:35.454 net/cpfl: not in enabled drivers build config 00:01:35.454 net/cxgbe: not in enabled drivers build config 00:01:35.454 net/dpaa: not in enabled drivers build config 00:01:35.454 net/dpaa2: not in enabled drivers build config 00:01:35.454 net/e1000: not in enabled drivers build config 00:01:35.454 net/ena: not in enabled drivers build config 00:01:35.454 net/enetc: not in enabled drivers build config 00:01:35.454 net/enetfec: not in enabled drivers build config 00:01:35.454 net/enic: not in enabled drivers build config 00:01:35.454 net/failsafe: not in enabled drivers build config 00:01:35.454 net/fm10k: not in enabled drivers build config 00:01:35.454 net/gve: not in enabled drivers build config 00:01:35.454 net/hinic: not in enabled drivers build config 00:01:35.454 net/hns3: not in enabled 
drivers build config 00:01:35.454 net/iavf: not in enabled drivers build config 00:01:35.454 net/ice: not in enabled drivers build config 00:01:35.455 net/idpf: not in enabled drivers build config 00:01:35.455 net/igc: not in enabled drivers build config 00:01:35.455 net/ionic: not in enabled drivers build config 00:01:35.455 net/ipn3ke: not in enabled drivers build config 00:01:35.455 net/ixgbe: not in enabled drivers build config 00:01:35.455 net/mana: not in enabled drivers build config 00:01:35.455 net/memif: not in enabled drivers build config 00:01:35.455 net/mlx4: not in enabled drivers build config 00:01:35.455 net/mlx5: not in enabled drivers build config 00:01:35.455 net/mvneta: not in enabled drivers build config 00:01:35.455 net/mvpp2: not in enabled drivers build config 00:01:35.455 net/netvsc: not in enabled drivers build config 00:01:35.455 net/nfb: not in enabled drivers build config 00:01:35.455 net/nfp: not in enabled drivers build config 00:01:35.455 net/ngbe: not in enabled drivers build config 00:01:35.455 net/null: not in enabled drivers build config 00:01:35.455 net/octeontx: not in enabled drivers build config 00:01:35.455 net/octeon_ep: not in enabled drivers build config 00:01:35.455 net/pcap: not in enabled drivers build config 00:01:35.455 net/pfe: not in enabled drivers build config 00:01:35.455 net/qede: not in enabled drivers build config 00:01:35.455 net/ring: not in enabled drivers build config 00:01:35.455 net/sfc: not in enabled drivers build config 00:01:35.455 net/softnic: not in enabled drivers build config 00:01:35.455 net/tap: not in enabled drivers build config 00:01:35.455 net/thunderx: not in enabled drivers build config 00:01:35.455 net/txgbe: not in enabled drivers build config 00:01:35.455 net/vdev_netvsc: not in enabled drivers build config 00:01:35.455 net/vhost: not in enabled drivers build config 00:01:35.455 net/virtio: not in enabled drivers build config 00:01:35.455 net/vmxnet3: not in enabled drivers build 
config 00:01:35.455 raw/cnxk_bphy: not in enabled drivers build config 00:01:35.455 raw/cnxk_gpio: not in enabled drivers build config 00:01:35.455 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:35.455 raw/ifpga: not in enabled drivers build config 00:01:35.455 raw/ntb: not in enabled drivers build config 00:01:35.455 raw/skeleton: not in enabled drivers build config 00:01:35.455 crypto/armv8: not in enabled drivers build config 00:01:35.455 crypto/bcmfs: not in enabled drivers build config 00:01:35.455 crypto/caam_jr: not in enabled drivers build config 00:01:35.455 crypto/ccp: not in enabled drivers build config 00:01:35.455 crypto/cnxk: not in enabled drivers build config 00:01:35.455 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.455 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.455 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.455 crypto/mlx5: not in enabled drivers build config 00:01:35.455 crypto/mvsam: not in enabled drivers build config 00:01:35.455 crypto/nitrox: not in enabled drivers build config 00:01:35.455 crypto/null: not in enabled drivers build config 00:01:35.455 crypto/octeontx: not in enabled drivers build config 00:01:35.455 crypto/openssl: not in enabled drivers build config 00:01:35.455 crypto/scheduler: not in enabled drivers build config 00:01:35.455 crypto/uadk: not in enabled drivers build config 00:01:35.455 crypto/virtio: not in enabled drivers build config 00:01:35.455 compress/isal: not in enabled drivers build config 00:01:35.455 compress/mlx5: not in enabled drivers build config 00:01:35.455 compress/octeontx: not in enabled drivers build config 00:01:35.455 compress/zlib: not in enabled drivers build config 00:01:35.455 regex/mlx5: not in enabled drivers build config 00:01:35.455 regex/cn9k: not in enabled drivers build config 00:01:35.455 ml/cnxk: not in enabled drivers build config 00:01:35.455 vdpa/ifc: not in enabled drivers build config 00:01:35.455 vdpa/mlx5: not in 
enabled drivers build config 00:01:35.455 vdpa/nfp: not in enabled drivers build config 00:01:35.455 vdpa/sfc: not in enabled drivers build config 00:01:35.455 event/cnxk: not in enabled drivers build config 00:01:35.455 event/dlb2: not in enabled drivers build config 00:01:35.455 event/dpaa: not in enabled drivers build config 00:01:35.455 event/dpaa2: not in enabled drivers build config 00:01:35.455 event/dsw: not in enabled drivers build config 00:01:35.455 event/opdl: not in enabled drivers build config 00:01:35.455 event/skeleton: not in enabled drivers build config 00:01:35.455 event/sw: not in enabled drivers build config 00:01:35.455 event/octeontx: not in enabled drivers build config 00:01:35.455 baseband/acc: not in enabled drivers build config 00:01:35.455 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:35.455 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:35.455 baseband/la12xx: not in enabled drivers build config 00:01:35.455 baseband/null: not in enabled drivers build config 00:01:35.455 baseband/turbo_sw: not in enabled drivers build config 00:01:35.455 gpu/cuda: not in enabled drivers build config 00:01:35.455 00:01:35.455 00:01:35.455 Build targets in project: 220 00:01:35.455 00:01:35.455 DPDK 23.11.0 00:01:35.455 00:01:35.455 User defined options 00:01:35.455 libdir : lib 00:01:35.455 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.455 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:35.455 c_link_args : 00:01:35.455 enable_docs : false 00:01:35.455 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:35.455 enable_kmods : false 00:01:35.455 machine : native 00:01:35.455 tests : false 00:01:35.455 00:01:35.455 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.455 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:35.455 13:37:13 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:35.455 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:35.455 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.455 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.455 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:35.455 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.455 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.455 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.455 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.455 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.455 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.455 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.455 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.455 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.455 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.455 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.455 [15/710] Linking static target lib/librte_kvargs.a 00:01:35.720 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.720 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.720 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.720 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:35.720 [20/710] Linking static target lib/librte_log.a 00:01:35.720 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.979 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.550 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:36.550 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:36.550 [25/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.550 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:36.550 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:36.550 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:36.550 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:36.550 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:36.550 [31/710] Linking target lib/librte_log.so.24.0 00:01:36.550 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:36.550 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:36.550 [34/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:36.550 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:36.550 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.550 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:36.550 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:36.550 [39/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:36.550 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:36.550 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.550 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.550 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.550 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.550 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.550 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.550 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.811 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.811 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.811 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.811 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.811 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:36.811 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.811 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.811 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.811 [56/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:36.811 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:36.811 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.811 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:36.811 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:36.811 [61/710] Linking target lib/librte_kvargs.so.24.0 00:01:36.811 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:36.811 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.069 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.069 [65/710] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:37.069 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:37.069 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:37.343 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:37.343 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.343 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:37.343 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.343 [72/710] Linking static target lib/librte_pci.a 00:01:37.343 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:37.343 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.343 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.606 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.606 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.606 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.606 [79/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.606 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.606 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.606 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.606 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.606 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.606 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.606 [86/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.870 [87/710] Compiling C object 
lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.870 [88/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.870 [89/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.870 [90/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.870 [91/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.870 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:37.870 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.870 [94/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.870 [95/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.870 [96/710] Linking static target lib/librte_ring.a 00:01:37.870 [97/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:37.870 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.870 [99/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.870 [100/710] Linking static target lib/librte_meter.a 00:01:37.870 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.870 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.870 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.870 [104/710] Linking static target lib/librte_telemetry.a 00:01:38.132 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.132 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.132 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.132 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.132 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.132 [110/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 
00:01:38.132 [111/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.132 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.132 [113/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.394 [114/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.394 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.394 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.394 [117/710] Linking static target lib/librte_eal.a 00:01:38.394 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.394 [119/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.394 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.394 [121/710] Linking static target lib/librte_net.a 00:01:38.394 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.394 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.394 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.394 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.658 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.658 [127/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.658 [128/710] Linking static target lib/librte_cmdline.a 00:01:38.658 [129/710] Linking static target lib/librte_mempool.a 00:01:38.658 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.658 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:38.920 [132/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.920 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 
00:01:38.920 [134/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:38.920 [135/710] Linking static target lib/librte_cfgfile.a 00:01:38.920 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:38.920 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:38.920 [138/710] Linking static target lib/librte_metrics.a 00:01:38.920 [139/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.920 [140/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:38.920 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.920 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.920 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:39.182 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:39.182 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:39.182 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.182 [147/710] Linking static target lib/librte_rcu.a 00:01:39.182 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:39.182 [149/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:39.182 [150/710] Linking static target lib/librte_bitratestats.a 00:01:39.182 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:39.449 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:39.449 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.449 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:39.449 [155/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.449 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.449 [157/710] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:39.449 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.449 [159/710] Linking static target lib/librte_timer.a 00:01:39.449 [160/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.449 [161/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.449 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:39.709 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.709 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.709 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.709 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:39.971 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:39.971 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:39.971 [169/710] Linking static target lib/librte_bbdev.a 00:01:39.971 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.971 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.971 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:39.971 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.237 [174/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.237 [175/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.237 [176/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.237 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.237 [178/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:40.237 [179/710] Linking static target lib/librte_compressdev.a 00:01:40.237 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:40.237 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:40.497 [182/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:40.497 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:40.497 [184/710] Linking static target lib/librte_distributor.a 00:01:40.497 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.761 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:40.761 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.761 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.761 [189/710] Linking static target lib/librte_dmadev.a 00:01:40.761 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:41.020 [191/710] Linking static target lib/librte_bpf.a 00:01:41.020 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:41.020 [193/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:41.020 [194/710] Linking static target lib/librte_dispatcher.a 00:01:41.020 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:41.020 [196/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.020 [197/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.020 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:41.020 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:41.020 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:41.283 [201/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:41.283 [202/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:41.283 [203/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:41.283 [204/710] Linking static target lib/librte_gpudev.a 00:01:41.283 [205/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:41.283 [206/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:41.283 [207/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:41.283 [208/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:41.283 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:41.283 [210/710] Linking static target lib/librte_gro.a 00:01:41.283 [211/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:41.283 [212/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:41.283 [213/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.283 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:41.549 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:41.549 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.549 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:41.549 [218/710] Linking static target lib/librte_jobstats.a 00:01:41.549 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:41.549 [220/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.549 [221/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:41.813 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.813 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:41.813 [224/710] 
Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:41.813 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:41.813 [226/710] Linking static target lib/librte_latencystats.a 00:01:42.078 [227/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.078 [228/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:42.078 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:42.078 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:42.078 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:42.078 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:42.339 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:42.340 [234/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:42.340 [235/710] Linking static target lib/librte_ip_frag.a 00:01:42.340 [236/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:42.340 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:42.340 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.340 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:42.340 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:42.340 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.601 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:42.601 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:42.602 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.602 [245/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 
00:01:42.602 [246/710] Linking static target lib/librte_gso.a 00:01:42.602 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:42.602 [248/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.865 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:42.865 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:42.865 [251/710] Linking static target lib/librte_regexdev.a 00:01:42.865 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.865 [253/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:43.130 [254/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:43.130 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:43.130 [256/710] Linking static target lib/librte_rawdev.a 00:01:43.130 [257/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.130 [258/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:43.130 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:43.130 [260/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:43.130 [261/710] Linking static target lib/librte_efd.a 00:01:43.130 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:43.130 [263/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:43.130 [264/710] Linking static target lib/librte_mldev.a 00:01:43.130 [265/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:43.394 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:43.394 [267/710] Linking static target lib/librte_pcapng.a 00:01:43.394 [268/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:43.394 [269/710] Linking static target lib/acl/libavx2_tmp.a 
00:01:43.394 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:43.394 [271/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:43.394 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:43.394 [273/710] Linking static target lib/librte_stack.a 00:01:43.394 [274/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:43.655 [275/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:43.655 [276/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.655 [277/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:43.655 [278/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:43.655 [279/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:43.655 [280/710] Linking static target lib/librte_lpm.a 00:01:43.655 [281/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:43.655 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:43.655 [283/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:43.655 [284/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.655 [285/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.655 [286/710] Linking static target lib/librte_hash.a 00:01:43.918 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.918 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:43.918 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:01:43.918 [290/710] Linking static target lib/librte_acl.a 00:01:43.918 [291/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:43.918 [292/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:43.918 
[293/710] Linking static target lib/librte_reorder.a 00:01:43.918 [294/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:43.918 [295/710] Linking static target lib/librte_power.a 00:01:44.179 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.179 [297/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.180 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.180 [299/710] Linking static target lib/librte_security.a 00:01:44.180 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.180 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:44.180 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:44.442 [303/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.442 [304/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:44.442 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:44.442 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.442 [307/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.442 [308/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:44.442 [309/710] Linking static target lib/librte_mbuf.a 00:01:44.442 [310/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:44.442 [311/710] Linking static target lib/librte_rib.a 00:01:44.442 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:44.442 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:44.703 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:44.703 [315/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.703 [316/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:44.703 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:44.703 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.965 [319/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:44.965 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:44.965 [321/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:44.965 [322/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:44.965 [323/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:44.965 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:44.965 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:44.965 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.228 [327/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:45.228 [328/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.228 [329/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.228 [330/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.489 [331/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.489 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:45.489 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:45.489 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:45.489 [335/710] Linking static target lib/librte_eventdev.a 00:01:45.748 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:45.748 [337/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.748 [338/710] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:45.748 [339/710] Linking static target lib/librte_member.a 00:01:45.748 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.012 [341/710] Linking static target lib/librte_cryptodev.a 00:01:46.012 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:46.012 [343/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:46.012 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:46.012 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:46.012 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:46.012 [347/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:46.012 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:46.012 [349/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:46.272 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:46.272 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:46.272 [352/710] Linking static target lib/librte_sched.a 00:01:46.272 [353/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:46.272 [354/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:46.272 [355/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:46.272 [356/710] Linking static target lib/librte_fib.a 00:01:46.272 [357/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:46.272 [358/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.272 [359/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.272 [360/710] Linking static target lib/librte_ethdev.a 00:01:46.534 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:46.534 [362/710] 
Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:46.534 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:46.534 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:46.534 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:46.534 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:46.534 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.796 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:46.796 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:46.796 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.796 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.796 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:47.064 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:47.064 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:47.064 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:47.064 [376/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:47.064 [377/710] Linking static target lib/librte_pdump.a 00:01:47.328 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.328 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:47.328 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:47.328 [381/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:47.328 [382/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:47.328 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:47.328 [384/710] Compiling C object 
lib/librte_graph.a.p/graph_node.c.o 00:01:47.590 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:47.590 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.590 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:47.590 [388/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.590 [389/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:47.590 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:47.590 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:47.590 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:47.853 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:47.853 [394/710] Linking static target lib/librte_ipsec.a 00:01:47.853 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.853 [396/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:47.853 [397/710] Linking static target lib/librte_table.a 00:01:47.853 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:48.118 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:48.118 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:48.118 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:48.384 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.650 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.650 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:48.650 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:48.650 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.650 
[407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.650 [408/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:48.912 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.912 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:48.912 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.912 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.912 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:48.912 [414/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.912 [415/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:49.171 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.171 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:49.171 [418/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.171 [419/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:49.171 [420/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:49.171 [421/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.171 [422/710] Linking target lib/librte_eal.so.24.0 00:01:49.171 [423/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.171 [424/710] Linking static target drivers/librte_bus_vdev.a 00:01:49.433 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:49.433 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.433 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:49.433 [428/710] Linking static target lib/librte_port.a 
00:01:49.433 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:49.433 [430/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:49.696 [431/710] Linking target lib/librte_ring.so.24.0 00:01:49.696 [432/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.696 [433/710] Linking target lib/librte_meter.so.24.0 00:01:49.696 [434/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:49.696 [435/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:49.696 [436/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:49.696 [437/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.696 [438/710] Linking target lib/librte_pci.so.24.0 00:01:49.696 [439/710] Linking target lib/librte_timer.so.24.0 00:01:49.696 [440/710] Linking target lib/librte_cfgfile.so.24.0 00:01:49.696 [441/710] Linking target lib/librte_acl.so.24.0 00:01:49.957 [442/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:49.957 [443/710] Linking target lib/librte_dmadev.so.24.0 00:01:49.957 [444/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:49.957 [445/710] Linking target lib/librte_rcu.so.24.0 00:01:49.957 [446/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:49.957 [447/710] Linking target lib/librte_mempool.so.24.0 00:01:49.957 [448/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:49.957 [449/710] Linking target lib/librte_jobstats.so.24.0 00:01:49.957 [450/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.957 [451/710] Linking static target lib/librte_graph.a 00:01:49.957 [452/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:49.957 [453/710] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:49.957 [454/710] Linking target lib/librte_rawdev.so.24.0 00:01:49.957 [455/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:49.957 [456/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.222 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.222 [458/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.222 [459/710] Linking target lib/librte_stack.so.24.0 00:01:50.222 [460/710] Linking static target drivers/librte_bus_pci.a 00:01:50.222 [461/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:50.222 [462/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:50.222 [463/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:50.222 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:50.222 [465/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:50.222 [466/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:50.222 [467/710] Linking target lib/librte_mbuf.so.24.0 00:01:50.222 [468/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:50.222 [469/710] Linking target lib/librte_rib.so.24.0 00:01:50.487 [470/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:50.487 [471/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:50.487 [472/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.487 [473/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.487 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.487 [475/710] Compiling C object 
drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.487 [476/710] Linking static target drivers/librte_mempool_ring.a 00:01:50.487 [477/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:50.487 [478/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:50.487 [479/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:50.750 [480/710] Linking target lib/librte_net.so.24.0 00:01:50.750 [481/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:50.750 [482/710] Linking target lib/librte_bbdev.so.24.0 00:01:50.750 [483/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:50.750 [484/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:50.750 [485/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:50.750 [486/710] Linking target lib/librte_distributor.so.24.0 00:01:50.750 [487/710] Linking target lib/librte_compressdev.so.24.0 00:01:50.750 [488/710] Linking target lib/librte_cryptodev.so.24.0 00:01:50.750 [489/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:50.750 [490/710] Linking target lib/librte_gpudev.so.24.0 00:01:50.750 [491/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:50.750 [492/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:50.750 [493/710] Linking target lib/librte_regexdev.so.24.0 00:01:50.750 [494/710] Linking target lib/librte_mldev.so.24.0 00:01:50.750 [495/710] Linking target lib/librte_reorder.so.24.0 00:01:50.750 [496/710] Linking target lib/librte_sched.so.24.0 00:01:50.750 [497/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:50.750 [498/710] Linking target lib/librte_fib.so.24.0 00:01:50.750 [499/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:50.750 [500/710] Compiling C object 
app/dpdk-graph.p/graph_l3fwd.c.o 00:01:50.750 [501/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:51.016 [502/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:51.016 [503/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.017 [504/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:51.017 [505/710] Linking target lib/librte_cmdline.so.24.0 00:01:51.017 [506/710] Linking target lib/librte_hash.so.24.0 00:01:51.017 [507/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:51.017 [508/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:51.017 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:51.017 [510/710] Linking target lib/librte_security.so.24.0 00:01:51.017 [511/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:51.017 [512/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.017 [513/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:51.281 [514/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:51.281 [515/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:51.281 [516/710] Linking target lib/librte_efd.so.24.0 00:01:51.281 [517/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:51.281 [518/710] Linking target lib/librte_lpm.so.24.0 00:01:51.281 [519/710] Linking target lib/librte_member.so.24.0 00:01:51.281 [520/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:51.281 [521/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:51.281 [522/710] Linking target lib/librte_ipsec.so.24.0 00:01:51.563 [523/710] Compiling C object 
app/dpdk-graph.p/graph_utils.c.o 00:01:51.563 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:51.563 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:51.884 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:51.884 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:51.884 [528/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:51.884 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:51.884 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:51.884 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:51.884 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:52.149 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:52.149 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:52.149 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:52.149 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:52.149 [537/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:52.415 [538/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:52.415 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:52.415 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:52.415 [541/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:52.681 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:52.681 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:52.681 [544/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 
00:01:52.681 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:52.942 [546/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:52.942 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:52.942 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:52.942 [549/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:52.942 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:52.942 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:53.204 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:53.204 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:53.204 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:53.204 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:53.204 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:53.469 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:53.469 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:53.469 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:53.733 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:53.993 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:53.993 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:54.331 [563/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:54.331 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:54.331 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 
00:01:54.331 [566/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:54.331 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:54.331 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:54.331 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:54.331 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:54.591 [571/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.591 [572/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:54.591 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:54.591 [574/710] Linking target lib/librte_ethdev.so.24.0 00:01:54.591 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:54.849 [576/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:54.849 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:54.849 [578/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:54.849 [579/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:54.849 [580/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:55.109 [581/710] Linking target lib/librte_metrics.so.24.0 00:01:55.109 [582/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:55.109 [583/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:55.109 [584/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:55.109 [585/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:55.109 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:55.109 
[587/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:55.109 [588/710] Linking target lib/librte_bpf.so.24.0 00:01:55.109 [589/710] Linking target lib/librte_gro.so.24.0 00:01:55.109 [590/710] Linking target lib/librte_gso.so.24.0 00:01:55.109 [591/710] Linking target lib/librte_eventdev.so.24.0 00:01:55.109 [592/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:55.109 [593/710] Linking static target lib/librte_pdcp.a 00:01:55.110 [594/710] Linking target lib/librte_pcapng.so.24.0 00:01:55.110 [595/710] Linking target lib/librte_ip_frag.so.24.0 00:01:55.371 [596/710] Linking target lib/librte_power.so.24.0 00:01:55.371 [597/710] Linking target lib/librte_bitratestats.so.24.0 00:01:55.371 [598/710] Linking target lib/librte_latencystats.so.24.0 00:01:55.371 [599/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:55.371 [600/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:55.371 [601/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:55.371 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:55.371 [603/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:55.371 [604/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:55.633 [605/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:55.633 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:55.633 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:55.633 [608/710] Linking target lib/librte_pdump.so.24.0 00:01:55.633 [609/710] Linking target lib/librte_graph.so.24.0 00:01:55.633 [610/710] Linking target lib/librte_dispatcher.so.24.0 00:01:55.633 [611/710] Linking target lib/librte_port.so.24.0 00:01:55.633 [612/710] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:55.633 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:55.633 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:55.903 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:55.903 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:55.903 [617/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:55.903 [618/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.903 [619/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:55.903 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:55.903 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:55.903 [622/710] Linking target lib/librte_pdcp.so.24.0 00:01:55.903 [623/710] Linking target lib/librte_table.so.24.0 00:01:55.903 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:55.903 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:55.903 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:56.161 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:56.161 [628/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:56.161 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:56.419 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:56.677 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:56.677 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:56.677 [633/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:56.677 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:56.936 [635/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:56.936 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:56.936 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:56.936 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:56.936 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:56.936 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:57.194 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:57.194 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:57.194 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:57.453 [644/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:57.453 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:57.453 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:57.453 [647/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:57.453 [648/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:57.712 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:57.712 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:57.712 [651/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:57.712 [652/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:57.712 [653/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:57.712 [654/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:57.970 [655/710] 
Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:57.970 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:58.228 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:58.228 [658/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:58.228 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:58.228 [660/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.228 [661/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.228 [662/710] Linking static target drivers/librte_net_i40e.a 00:01:58.487 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:58.487 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:58.744 [665/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:58.744 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.001 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:59.001 [668/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:59.001 [669/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:59.001 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:59.258 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:59.515 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:59.515 [673/710] Linking static target lib/librte_node.a 00:01:59.773 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:59.773 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.773 [676/710] Linking target lib/librte_node.so.24.0 00:02:01.144 
[677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:01.144 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:01.144 [679/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:03.044 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:03.302 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:09.857 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.924 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.924 [684/710] Linking static target lib/librte_vhost.a 00:02:41.924 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.924 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:54.131 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:54.131 [688/710] Linking static target lib/librte_pipeline.a 00:02:54.131 [689/710] Linking target app/dpdk-test-acl 00:02:54.131 [690/710] Linking target app/dpdk-test-sad 00:02:54.131 [691/710] Linking target app/dpdk-test-dma-perf 00:02:54.131 [692/710] Linking target app/dpdk-pdump 00:02:54.131 [693/710] Linking target app/dpdk-test-pipeline 00:02:54.131 [694/710] Linking target app/dpdk-test-eventdev 00:02:54.131 [695/710] Linking target app/dpdk-test-mldev 00:02:54.131 [696/710] Linking target app/dpdk-test-compress-perf 00:02:54.131 [697/710] Linking target app/dpdk-proc-info 00:02:54.131 [698/710] Linking target app/dpdk-dumpcap 00:02:54.131 [699/710] Linking target app/dpdk-test-gpudev 00:02:54.131 [700/710] Linking target app/dpdk-test-fib 00:02:54.131 [701/710] Linking target app/dpdk-test-cmdline 00:02:54.131 [702/710] Linking target app/dpdk-test-regex 00:02:54.131 [703/710] Linking target app/dpdk-test-flow-perf 00:02:54.131 [704/710] Linking target app/dpdk-graph 00:02:54.131 [705/710] Linking target 
app/dpdk-test-security-perf 00:02:54.131 [706/710] Linking target app/dpdk-test-bbdev 00:02:54.131 [707/710] Linking target app/dpdk-test-crypto-perf 00:02:54.131 [708/710] Linking target app/dpdk-testpmd 00:02:55.507 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.765 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:55.765 13:38:33 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:55.765 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:55.765 [0/1] Installing files. 00:02:56.026 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:56.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:56.027 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 
00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:56.027 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.027 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:56.028 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.028 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:56.029 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.029 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.029 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.029 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:56.029 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:56.030 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:56.030 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:56.031 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.033 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:56.034 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:56.034 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.034 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.035 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:56.602 Installing 
lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing lib/librte_node.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.602 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.602 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.602 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:56.602 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:56.602 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-crypto-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.602 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.602 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:56.864 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:56.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:56.866 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24
00:02:56.866 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so
00:02:56.866 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:02:56.866 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:02:56.866 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:02:56.866 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:02:56.866 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:02:56.866 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:02:56.866 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:02:56.866 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:02:56.866 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:02:56.866 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:02:56.866 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:02:56.866 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:02:56.866 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:02:56.866 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:02:56.866 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24
00:02:56.866 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:02:56.866 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:02:56.866 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:02:56.866 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:02:56.866 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:02:56.866 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:02:56.866 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:02:56.866 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24
00:02:56.866 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:02:56.866 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24
00:02:56.866 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:02:56.866 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24
00:02:56.866 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:02:56.866 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24
00:02:56.866 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:02:56.866 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24
00:02:56.867 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:02:56.867 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24
00:02:56.867 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:02:56.867 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24
00:02:56.867 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:02:56.867 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24
00:02:56.867 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:02:56.867 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24
00:02:56.867 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:02:56.867 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24
00:02:56.867 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:02:56.867 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:02:56.867 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:02:56.867 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:02:56.867 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:02:56.867 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:02:56.867 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:02:56.867 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:02:56.867 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:02:56.867 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:02:56.867 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:02:56.867 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:02:56.867 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:02:56.867 Installing symlink pointing to
librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:56.867 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:56.867 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:56.867 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:56.867 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:56.867 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:56.867 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:56.867 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:56.867 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:56.867 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:56.867 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:56.867 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:56.867 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:56.867 Installing symlink pointing to librte_lpm.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:56.867 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:56.867 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:56.867 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:56.867 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:56.867 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:56.867 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:56.867 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:56.867 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:56.867 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:56.867 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:56.867 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:56.867 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:56.867 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:56.867 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:56.867 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:56.867 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:56.867 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:56.867 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:56.867 Installing symlink pointing to librte_rawdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:56.867 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:56.867 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:56.867 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:56.867 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:56.867 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:56.867 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:56.867 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:56.867 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:56.867 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:56.867 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:56.867 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:56.867 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:56.867 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:56.867 
Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:56.867 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:56.867 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:56.867 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:56.867 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:56.867 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:56.867 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:56.867 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:56.867 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:56.867 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:56.868 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:56.868 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:56.868 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:56.868 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
00:02:56.868 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:56.868 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:56.868 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:56.868 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:56.868 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:56.868 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:56.868 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:56.868 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:56.868 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:56.868 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:56.868 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:56.868 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:56.868 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
00:02:56.868 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:56.868 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:56.868 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:56.868 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:56.868 13:38:34 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:56.868 13:38:34 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:56.868 13:38:34 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:56.868 13:38:34 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.868 00:02:56.868 real 1m27.068s 00:02:56.868 user 17m58.297s 00:02:56.868 sys 2m5.842s 00:02:56.868 13:38:34 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:56.868 13:38:34 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:56.868 ************************************ 00:02:56.868 END TEST build_native_dpdk 00:02:56.868 ************************************ 00:02:56.868 13:38:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:56.868 13:38:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:56.868 13:38:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:56.868 13:38:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:56.868 13:38:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:56.868 13:38:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:56.868 13:38:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:56.868 13:38:34 -- 
spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:56.868 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:57.126 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:57.126 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:57.126 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:57.383 Using 'verbs' RDMA provider 00:03:07.971 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:16.080 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:16.339 Creating mk/config.mk...done. 00:03:16.339 Creating mk/cc.flags.mk...done. 00:03:16.339 Type 'make' to build. 00:03:16.339 13:38:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:16.339 13:38:54 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:16.339 13:38:54 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:16.339 13:38:54 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.339 ************************************ 00:03:16.339 START TEST make 00:03:16.339 ************************************ 00:03:16.339 13:38:54 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:16.597 make[1]: Nothing to be done for 'all'. 
00:03:18.512 The Meson build system 00:03:18.512 Version: 1.3.1 00:03:18.512 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:18.512 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:18.512 Build type: native build 00:03:18.512 Project name: libvfio-user 00:03:18.512 Project version: 0.0.1 00:03:18.512 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:18.512 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:18.512 Host machine cpu family: x86_64 00:03:18.512 Host machine cpu: x86_64 00:03:18.512 Run-time dependency threads found: YES 00:03:18.512 Library dl found: YES 00:03:18.512 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:18.512 Run-time dependency json-c found: YES 0.17 00:03:18.512 Run-time dependency cmocka found: YES 1.1.7 00:03:18.512 Program pytest-3 found: NO 00:03:18.512 Program flake8 found: NO 00:03:18.512 Program misspell-fixer found: NO 00:03:18.512 Program restructuredtext-lint found: NO 00:03:18.512 Program valgrind found: YES (/usr/bin/valgrind) 00:03:18.512 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:18.512 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:18.512 Compiler for C supports arguments -Wwrite-strings: YES 00:03:18.512 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:18.512 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:18.512 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:18.512 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:18.512 Build targets in project: 8 00:03:18.512 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:18.512 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:18.512 00:03:18.512 libvfio-user 0.0.1 00:03:18.512 00:03:18.512 User defined options 00:03:18.512 buildtype : debug 00:03:18.512 default_library: shared 00:03:18.512 libdir : /usr/local/lib 00:03:18.512 00:03:18.512 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:19.094 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:19.352 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:19.352 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:19.352 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:19.352 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:19.352 [5/37] Compiling C object samples/null.p/null.c.o 00:03:19.352 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:19.352 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:19.352 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:19.352 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:19.352 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:19.352 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:19.352 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:19.352 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:19.352 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:19.353 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:19.353 [16/37] Compiling C object samples/server.p/server.c.o 00:03:19.353 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:19.353 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_sock.c.o 00:03:19.353 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:19.353 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:19.353 [21/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:19.353 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:19.353 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:19.353 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:19.353 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:19.614 [26/37] Compiling C object samples/client.p/client.c.o 00:03:19.614 [27/37] Linking target samples/client 00:03:19.614 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:19.614 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:19.614 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:19.876 [31/37] Linking target test/unit_tests 00:03:19.876 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:19.876 [33/37] Linking target samples/gpio-pci-idio-16 00:03:19.876 [34/37] Linking target samples/server 00:03:19.876 [35/37] Linking target samples/null 00:03:19.876 [36/37] Linking target samples/lspci 00:03:19.876 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:19.876 INFO: autodetecting backend as ninja 00:03:19.876 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:20.146 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:20.721 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:20.721 ninja: no work to do. 
00:03:32.911 CC lib/log/log.o 00:03:32.911 CC lib/ut_mock/mock.o 00:03:32.911 CC lib/log/log_flags.o 00:03:32.911 CC lib/log/log_deprecated.o 00:03:32.911 CC lib/ut/ut.o 00:03:32.911 LIB libspdk_log.a 00:03:32.911 LIB libspdk_ut.a 00:03:32.911 LIB libspdk_ut_mock.a 00:03:32.911 SO libspdk_ut.so.2.0 00:03:32.911 SO libspdk_log.so.7.0 00:03:32.911 SO libspdk_ut_mock.so.6.0 00:03:32.911 SYMLINK libspdk_ut.so 00:03:32.911 SYMLINK libspdk_ut_mock.so 00:03:32.911 SYMLINK libspdk_log.so 00:03:32.911 CXX lib/trace_parser/trace.o 00:03:32.911 CC lib/ioat/ioat.o 00:03:32.911 CC lib/dma/dma.o 00:03:32.911 CC lib/util/base64.o 00:03:32.911 CC lib/util/bit_array.o 00:03:32.911 CC lib/util/cpuset.o 00:03:32.911 CC lib/util/crc16.o 00:03:32.911 CC lib/util/crc32.o 00:03:32.911 CC lib/util/crc32c.o 00:03:32.911 CC lib/util/crc32_ieee.o 00:03:32.911 CC lib/util/crc64.o 00:03:32.911 CC lib/util/dif.o 00:03:32.911 CC lib/util/fd.o 00:03:32.911 CC lib/util/file.o 00:03:32.911 CC lib/util/hexlify.o 00:03:32.911 CC lib/util/iov.o 00:03:32.911 CC lib/util/math.o 00:03:32.911 CC lib/util/pipe.o 00:03:32.911 CC lib/util/strerror_tls.o 00:03:32.911 CC lib/util/string.o 00:03:32.911 CC lib/util/uuid.o 00:03:32.911 CC lib/util/fd_group.o 00:03:32.911 CC lib/util/xor.o 00:03:32.911 CC lib/util/zipf.o 00:03:33.170 CC lib/vfio_user/host/vfio_user_pci.o 00:03:33.170 CC lib/vfio_user/host/vfio_user.o 00:03:33.170 LIB libspdk_dma.a 00:03:33.170 SO libspdk_dma.so.4.0 00:03:33.170 SYMLINK libspdk_dma.so 00:03:33.170 LIB libspdk_ioat.a 00:03:33.170 SO libspdk_ioat.so.7.0 00:03:33.428 LIB libspdk_vfio_user.a 00:03:33.428 SYMLINK libspdk_ioat.so 00:03:33.428 SO libspdk_vfio_user.so.5.0 00:03:33.428 SYMLINK libspdk_vfio_user.so 00:03:33.428 LIB libspdk_util.a 00:03:33.428 SO libspdk_util.so.9.0 00:03:33.686 SYMLINK libspdk_util.so 00:03:33.944 CC lib/vmd/vmd.o 00:03:33.944 CC lib/conf/conf.o 00:03:33.944 CC lib/idxd/idxd.o 00:03:33.944 CC lib/rdma/common.o 00:03:33.944 CC lib/json/json_parse.o 
00:03:33.944 CC lib/env_dpdk/env.o 00:03:33.944 CC lib/vmd/led.o 00:03:33.944 CC lib/idxd/idxd_user.o 00:03:33.944 CC lib/rdma/rdma_verbs.o 00:03:33.944 CC lib/env_dpdk/memory.o 00:03:33.944 CC lib/json/json_util.o 00:03:33.944 CC lib/idxd/idxd_kernel.o 00:03:33.944 CC lib/env_dpdk/pci.o 00:03:33.944 CC lib/json/json_write.o 00:03:33.944 CC lib/env_dpdk/init.o 00:03:33.944 CC lib/env_dpdk/threads.o 00:03:33.944 CC lib/env_dpdk/pci_ioat.o 00:03:33.944 CC lib/env_dpdk/pci_virtio.o 00:03:33.944 CC lib/env_dpdk/pci_vmd.o 00:03:33.944 CC lib/env_dpdk/pci_idxd.o 00:03:33.944 CC lib/env_dpdk/pci_event.o 00:03:33.944 CC lib/env_dpdk/sigbus_handler.o 00:03:33.944 CC lib/env_dpdk/pci_dpdk.o 00:03:33.944 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:33.944 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:33.944 LIB libspdk_trace_parser.a 00:03:33.944 SO libspdk_trace_parser.so.5.0 00:03:33.944 SYMLINK libspdk_trace_parser.so 00:03:34.224 LIB libspdk_rdma.a 00:03:34.224 LIB libspdk_json.a 00:03:34.224 LIB libspdk_conf.a 00:03:34.224 SO libspdk_rdma.so.6.0 00:03:34.224 SO libspdk_conf.so.6.0 00:03:34.224 SO libspdk_json.so.6.0 00:03:34.224 SYMLINK libspdk_conf.so 00:03:34.224 SYMLINK libspdk_rdma.so 00:03:34.224 SYMLINK libspdk_json.so 00:03:34.498 CC lib/jsonrpc/jsonrpc_server.o 00:03:34.498 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:34.498 CC lib/jsonrpc/jsonrpc_client.o 00:03:34.498 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:34.498 LIB libspdk_idxd.a 00:03:34.498 SO libspdk_idxd.so.12.0 00:03:34.498 LIB libspdk_vmd.a 00:03:34.498 SYMLINK libspdk_idxd.so 00:03:34.498 SO libspdk_vmd.so.6.0 00:03:34.756 SYMLINK libspdk_vmd.so 00:03:34.756 LIB libspdk_jsonrpc.a 00:03:34.756 SO libspdk_jsonrpc.so.6.0 00:03:34.756 SYMLINK libspdk_jsonrpc.so 00:03:35.014 CC lib/rpc/rpc.o 00:03:35.272 LIB libspdk_rpc.a 00:03:35.272 SO libspdk_rpc.so.6.0 00:03:35.272 SYMLINK libspdk_rpc.so 00:03:35.529 CC lib/trace/trace.o 00:03:35.529 CC lib/trace/trace_flags.o 00:03:35.529 CC lib/trace/trace_rpc.o 00:03:35.529 CC 
lib/notify/notify.o 00:03:35.529 CC lib/keyring/keyring.o 00:03:35.529 CC lib/keyring/keyring_rpc.o 00:03:35.529 CC lib/notify/notify_rpc.o 00:03:35.529 LIB libspdk_notify.a 00:03:35.529 SO libspdk_notify.so.6.0 00:03:35.529 LIB libspdk_keyring.a 00:03:35.787 SYMLINK libspdk_notify.so 00:03:35.787 LIB libspdk_trace.a 00:03:35.787 SO libspdk_keyring.so.1.0 00:03:35.787 SO libspdk_trace.so.10.0 00:03:35.787 SYMLINK libspdk_keyring.so 00:03:35.787 SYMLINK libspdk_trace.so 00:03:35.787 LIB libspdk_env_dpdk.a 00:03:36.044 CC lib/thread/thread.o 00:03:36.044 CC lib/thread/iobuf.o 00:03:36.044 CC lib/sock/sock.o 00:03:36.044 CC lib/sock/sock_rpc.o 00:03:36.044 SO libspdk_env_dpdk.so.14.0 00:03:36.044 SYMLINK libspdk_env_dpdk.so 00:03:36.303 LIB libspdk_sock.a 00:03:36.303 SO libspdk_sock.so.9.0 00:03:36.303 SYMLINK libspdk_sock.so 00:03:36.561 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:36.561 CC lib/nvme/nvme_ctrlr.o 00:03:36.561 CC lib/nvme/nvme_fabric.o 00:03:36.561 CC lib/nvme/nvme_ns_cmd.o 00:03:36.561 CC lib/nvme/nvme_ns.o 00:03:36.561 CC lib/nvme/nvme_pcie_common.o 00:03:36.561 CC lib/nvme/nvme_pcie.o 00:03:36.561 CC lib/nvme/nvme_qpair.o 00:03:36.561 CC lib/nvme/nvme.o 00:03:36.561 CC lib/nvme/nvme_quirks.o 00:03:36.561 CC lib/nvme/nvme_transport.o 00:03:36.561 CC lib/nvme/nvme_discovery.o 00:03:36.561 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:36.561 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:36.561 CC lib/nvme/nvme_tcp.o 00:03:36.561 CC lib/nvme/nvme_opal.o 00:03:36.561 CC lib/nvme/nvme_io_msg.o 00:03:36.561 CC lib/nvme/nvme_poll_group.o 00:03:36.561 CC lib/nvme/nvme_zns.o 00:03:36.561 CC lib/nvme/nvme_stubs.o 00:03:36.561 CC lib/nvme/nvme_auth.o 00:03:36.561 CC lib/nvme/nvme_cuse.o 00:03:36.561 CC lib/nvme/nvme_vfio_user.o 00:03:36.561 CC lib/nvme/nvme_rdma.o 00:03:37.496 LIB libspdk_thread.a 00:03:37.496 SO libspdk_thread.so.10.0 00:03:37.496 SYMLINK libspdk_thread.so 00:03:37.755 CC lib/init/json_config.o 00:03:37.755 CC lib/vfu_tgt/tgt_endpoint.o 00:03:37.755 CC 
lib/blob/blobstore.o 00:03:37.755 CC lib/accel/accel.o 00:03:37.755 CC lib/virtio/virtio.o 00:03:37.755 CC lib/vfu_tgt/tgt_rpc.o 00:03:37.755 CC lib/init/subsystem.o 00:03:37.755 CC lib/blob/request.o 00:03:37.755 CC lib/accel/accel_rpc.o 00:03:37.755 CC lib/virtio/virtio_vhost_user.o 00:03:37.755 CC lib/init/subsystem_rpc.o 00:03:37.755 CC lib/blob/zeroes.o 00:03:37.755 CC lib/init/rpc.o 00:03:37.755 CC lib/virtio/virtio_vfio_user.o 00:03:37.755 CC lib/accel/accel_sw.o 00:03:37.755 CC lib/blob/blob_bs_dev.o 00:03:37.755 CC lib/virtio/virtio_pci.o 00:03:38.013 LIB libspdk_init.a 00:03:38.013 SO libspdk_init.so.5.0 00:03:38.013 LIB libspdk_virtio.a 00:03:38.013 LIB libspdk_vfu_tgt.a 00:03:38.272 SYMLINK libspdk_init.so 00:03:38.272 SO libspdk_virtio.so.7.0 00:03:38.272 SO libspdk_vfu_tgt.so.3.0 00:03:38.272 SYMLINK libspdk_vfu_tgt.so 00:03:38.272 SYMLINK libspdk_virtio.so 00:03:38.272 CC lib/event/app.o 00:03:38.272 CC lib/event/reactor.o 00:03:38.272 CC lib/event/log_rpc.o 00:03:38.272 CC lib/event/app_rpc.o 00:03:38.272 CC lib/event/scheduler_static.o 00:03:38.839 LIB libspdk_event.a 00:03:38.839 SO libspdk_event.so.13.0 00:03:38.839 SYMLINK libspdk_event.so 00:03:38.839 LIB libspdk_accel.a 00:03:38.839 SO libspdk_accel.so.15.0 00:03:38.839 SYMLINK libspdk_accel.so 00:03:39.097 CC lib/bdev/bdev.o 00:03:39.097 CC lib/bdev/bdev_rpc.o 00:03:39.097 CC lib/bdev/bdev_zone.o 00:03:39.097 CC lib/bdev/part.o 00:03:39.097 CC lib/bdev/scsi_nvme.o 00:03:39.097 LIB libspdk_nvme.a 00:03:39.355 SO libspdk_nvme.so.13.0 00:03:39.613 SYMLINK libspdk_nvme.so 00:03:40.987 LIB libspdk_blob.a 00:03:40.987 SO libspdk_blob.so.11.0 00:03:40.987 SYMLINK libspdk_blob.so 00:03:40.987 CC lib/lvol/lvol.o 00:03:40.987 CC lib/blobfs/blobfs.o 00:03:40.987 CC lib/blobfs/tree.o 00:03:41.553 LIB libspdk_bdev.a 00:03:41.553 SO libspdk_bdev.so.15.0 00:03:41.820 SYMLINK libspdk_bdev.so 00:03:41.821 CC lib/ublk/ublk.o 00:03:41.821 CC lib/nbd/nbd.o 00:03:41.821 CC lib/scsi/dev.o 00:03:41.821 CC 
lib/nvmf/ctrlr.o 00:03:41.821 CC lib/nbd/nbd_rpc.o 00:03:41.821 CC lib/ublk/ublk_rpc.o 00:03:41.821 CC lib/scsi/lun.o 00:03:41.821 CC lib/nvmf/ctrlr_discovery.o 00:03:41.821 CC lib/scsi/port.o 00:03:41.821 CC lib/nvmf/ctrlr_bdev.o 00:03:41.821 CC lib/scsi/scsi.o 00:03:41.821 CC lib/ftl/ftl_core.o 00:03:41.821 CC lib/nvmf/subsystem.o 00:03:41.821 CC lib/scsi/scsi_bdev.o 00:03:41.821 CC lib/nvmf/nvmf.o 00:03:41.821 CC lib/ftl/ftl_init.o 00:03:41.821 CC lib/scsi/scsi_pr.o 00:03:41.821 CC lib/nvmf/nvmf_rpc.o 00:03:41.821 CC lib/scsi/scsi_rpc.o 00:03:41.821 CC lib/ftl/ftl_layout.o 00:03:41.821 CC lib/scsi/task.o 00:03:41.821 CC lib/ftl/ftl_debug.o 00:03:41.821 CC lib/nvmf/transport.o 00:03:41.821 CC lib/nvmf/tcp.o 00:03:41.821 CC lib/ftl/ftl_io.o 00:03:41.821 CC lib/nvmf/stubs.o 00:03:41.821 CC lib/nvmf/mdns_server.o 00:03:41.821 CC lib/ftl/ftl_l2p.o 00:03:41.821 CC lib/ftl/ftl_sb.o 00:03:41.821 CC lib/nvmf/vfio_user.o 00:03:41.821 CC lib/ftl/ftl_l2p_flat.o 00:03:41.821 CC lib/nvmf/auth.o 00:03:41.821 CC lib/nvmf/rdma.o 00:03:41.821 CC lib/ftl/ftl_nv_cache.o 00:03:41.821 CC lib/ftl/ftl_band.o 00:03:41.821 CC lib/ftl/ftl_band_ops.o 00:03:41.821 CC lib/ftl/ftl_writer.o 00:03:41.821 CC lib/ftl/ftl_rq.o 00:03:41.821 CC lib/ftl/ftl_reloc.o 00:03:41.821 CC lib/ftl/ftl_l2p_cache.o 00:03:41.821 CC lib/ftl/ftl_p2l.o 00:03:41.821 CC lib/ftl/mngt/ftl_mngt.o 00:03:41.821 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:41.821 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:41.821 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:41.821 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:42.079 LIB libspdk_blobfs.a 00:03:42.079 LIB libspdk_lvol.a 00:03:42.079 SO libspdk_blobfs.so.10.0 00:03:42.079 SO libspdk_lvol.so.10.0 00:03:42.341 SYMLINK libspdk_lvol.so 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:42.341 SYMLINK libspdk_blobfs.so 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_self_test.o 
00:03:42.341 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:42.341 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:42.341 CC lib/ftl/utils/ftl_conf.o 00:03:42.341 CC lib/ftl/utils/ftl_md.o 00:03:42.341 CC lib/ftl/utils/ftl_mempool.o 00:03:42.341 CC lib/ftl/utils/ftl_bitmap.o 00:03:42.341 CC lib/ftl/utils/ftl_property.o 00:03:42.341 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:42.341 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:42.341 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:42.341 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:42.341 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:42.341 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:42.341 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:42.602 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:42.602 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:42.602 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:42.602 CC lib/ftl/base/ftl_base_dev.o 00:03:42.602 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:42.602 CC lib/ftl/base/ftl_base_bdev.o 00:03:42.602 CC lib/ftl/ftl_trace.o 00:03:42.602 LIB libspdk_nbd.a 00:03:42.602 SO libspdk_nbd.so.7.0 00:03:42.860 SYMLINK libspdk_nbd.so 00:03:42.860 LIB libspdk_scsi.a 00:03:42.860 SO libspdk_scsi.so.9.0 00:03:42.860 LIB libspdk_ublk.a 00:03:42.860 SYMLINK libspdk_scsi.so 00:03:43.118 SO libspdk_ublk.so.3.0 00:03:43.118 SYMLINK libspdk_ublk.so 00:03:43.118 CC lib/iscsi/conn.o 00:03:43.118 CC lib/vhost/vhost.o 00:03:43.118 CC lib/iscsi/init_grp.o 00:03:43.118 CC lib/iscsi/iscsi.o 00:03:43.118 CC lib/iscsi/md5.o 00:03:43.118 CC lib/vhost/vhost_rpc.o 00:03:43.118 CC lib/iscsi/param.o 00:03:43.118 CC lib/vhost/vhost_scsi.o 00:03:43.118 CC lib/iscsi/portal_grp.o 00:03:43.118 CC lib/vhost/vhost_blk.o 00:03:43.118 CC lib/vhost/rte_vhost_user.o 00:03:43.118 CC lib/iscsi/tgt_node.o 00:03:43.118 CC lib/iscsi/iscsi_subsystem.o 00:03:43.118 CC lib/iscsi/iscsi_rpc.o 00:03:43.118 CC lib/iscsi/task.o 00:03:43.376 LIB libspdk_ftl.a 00:03:43.376 SO libspdk_ftl.so.9.0 00:03:43.943 SYMLINK libspdk_ftl.so 00:03:44.511 
LIB libspdk_vhost.a 00:03:44.511 SO libspdk_vhost.so.8.0 00:03:44.511 LIB libspdk_nvmf.a 00:03:44.511 SYMLINK libspdk_vhost.so 00:03:44.511 SO libspdk_nvmf.so.18.0 00:03:44.511 LIB libspdk_iscsi.a 00:03:44.511 SO libspdk_iscsi.so.8.0 00:03:44.769 SYMLINK libspdk_nvmf.so 00:03:44.769 SYMLINK libspdk_iscsi.so 00:03:45.027 CC module/env_dpdk/env_dpdk_rpc.o 00:03:45.027 CC module/vfu_device/vfu_virtio.o 00:03:45.027 CC module/vfu_device/vfu_virtio_blk.o 00:03:45.027 CC module/vfu_device/vfu_virtio_scsi.o 00:03:45.027 CC module/vfu_device/vfu_virtio_rpc.o 00:03:45.027 CC module/accel/error/accel_error.o 00:03:45.027 CC module/blob/bdev/blob_bdev.o 00:03:45.027 CC module/accel/ioat/accel_ioat.o 00:03:45.027 CC module/accel/error/accel_error_rpc.o 00:03:45.027 CC module/scheduler/gscheduler/gscheduler.o 00:03:45.027 CC module/keyring/file/keyring.o 00:03:45.027 CC module/accel/ioat/accel_ioat_rpc.o 00:03:45.027 CC module/keyring/file/keyring_rpc.o 00:03:45.027 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:45.027 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:45.027 CC module/keyring/linux/keyring.o 00:03:45.027 CC module/sock/posix/posix.o 00:03:45.027 CC module/keyring/linux/keyring_rpc.o 00:03:45.027 CC module/accel/iaa/accel_iaa.o 00:03:45.027 CC module/accel/dsa/accel_dsa.o 00:03:45.027 CC module/accel/iaa/accel_iaa_rpc.o 00:03:45.027 CC module/accel/dsa/accel_dsa_rpc.o 00:03:45.285 LIB libspdk_env_dpdk_rpc.a 00:03:45.285 SO libspdk_env_dpdk_rpc.so.6.0 00:03:45.286 SYMLINK libspdk_env_dpdk_rpc.so 00:03:45.286 LIB libspdk_scheduler_gscheduler.a 00:03:45.286 LIB libspdk_keyring_linux.a 00:03:45.286 LIB libspdk_keyring_file.a 00:03:45.286 LIB libspdk_scheduler_dpdk_governor.a 00:03:45.286 SO libspdk_scheduler_gscheduler.so.4.0 00:03:45.286 SO libspdk_keyring_linux.so.1.0 00:03:45.286 SO libspdk_keyring_file.so.1.0 00:03:45.286 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:45.286 LIB libspdk_accel_error.a 00:03:45.286 LIB libspdk_accel_ioat.a 
00:03:45.286 LIB libspdk_scheduler_dynamic.a 00:03:45.286 LIB libspdk_accel_iaa.a 00:03:45.286 SO libspdk_accel_error.so.2.0 00:03:45.286 SYMLINK libspdk_scheduler_gscheduler.so 00:03:45.286 SO libspdk_scheduler_dynamic.so.4.0 00:03:45.286 SO libspdk_accel_ioat.so.6.0 00:03:45.286 SYMLINK libspdk_keyring_file.so 00:03:45.286 SYMLINK libspdk_keyring_linux.so 00:03:45.286 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:45.286 SO libspdk_accel_iaa.so.3.0 00:03:45.543 LIB libspdk_blob_bdev.a 00:03:45.543 LIB libspdk_accel_dsa.a 00:03:45.543 SYMLINK libspdk_scheduler_dynamic.so 00:03:45.543 SYMLINK libspdk_accel_error.so 00:03:45.543 SYMLINK libspdk_accel_ioat.so 00:03:45.543 SO libspdk_blob_bdev.so.11.0 00:03:45.543 SO libspdk_accel_dsa.so.5.0 00:03:45.543 SYMLINK libspdk_accel_iaa.so 00:03:45.543 SYMLINK libspdk_blob_bdev.so 00:03:45.543 SYMLINK libspdk_accel_dsa.so 00:03:45.801 LIB libspdk_vfu_device.a 00:03:45.801 SO libspdk_vfu_device.so.3.0 00:03:45.802 CC module/bdev/passthru/vbdev_passthru.o 00:03:45.802 CC module/bdev/aio/bdev_aio.o 00:03:45.802 CC module/bdev/lvol/vbdev_lvol.o 00:03:45.802 CC module/bdev/iscsi/bdev_iscsi.o 00:03:45.802 CC module/bdev/gpt/gpt.o 00:03:45.802 CC module/blobfs/bdev/blobfs_bdev.o 00:03:45.802 CC module/bdev/delay/vbdev_delay.o 00:03:45.802 CC module/bdev/malloc/bdev_malloc.o 00:03:45.802 CC module/bdev/nvme/bdev_nvme.o 00:03:45.802 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:45.802 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:45.802 CC module/bdev/split/vbdev_split.o 00:03:45.802 CC module/bdev/ftl/bdev_ftl.o 00:03:45.802 CC module/bdev/null/bdev_null.o 00:03:45.802 CC module/bdev/error/vbdev_error.o 00:03:45.802 CC module/bdev/aio/bdev_aio_rpc.o 00:03:45.802 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:45.802 CC module/bdev/split/vbdev_split_rpc.o 00:03:45.802 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:45.802 CC module/bdev/null/bdev_null_rpc.o 00:03:45.802 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:45.802 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:03:45.802 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:45.802 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:45.802 CC module/bdev/gpt/vbdev_gpt.o 00:03:45.802 CC module/bdev/error/vbdev_error_rpc.o 00:03:45.802 CC module/bdev/nvme/nvme_rpc.o 00:03:45.802 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:45.802 CC module/bdev/nvme/bdev_mdns_client.o 00:03:45.802 CC module/bdev/raid/bdev_raid.o 00:03:45.802 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:45.802 CC module/bdev/nvme/vbdev_opal.o 00:03:45.802 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:45.802 CC module/bdev/raid/bdev_raid_rpc.o 00:03:45.802 CC module/bdev/raid/bdev_raid_sb.o 00:03:45.802 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:45.802 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:45.802 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:45.802 CC module/bdev/raid/raid0.o 00:03:45.802 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:45.802 CC module/bdev/raid/raid1.o 00:03:45.802 CC module/bdev/raid/concat.o 00:03:45.802 SYMLINK libspdk_vfu_device.so 00:03:46.059 LIB libspdk_sock_posix.a 00:03:46.059 SO libspdk_sock_posix.so.6.0 00:03:46.059 LIB libspdk_blobfs_bdev.a 00:03:46.059 SO libspdk_blobfs_bdev.so.6.0 00:03:46.316 SYMLINK libspdk_sock_posix.so 00:03:46.316 SYMLINK libspdk_blobfs_bdev.so 00:03:46.316 LIB libspdk_bdev_gpt.a 00:03:46.316 LIB libspdk_bdev_error.a 00:03:46.316 LIB libspdk_bdev_split.a 00:03:46.316 SO libspdk_bdev_gpt.so.6.0 00:03:46.316 SO libspdk_bdev_error.so.6.0 00:03:46.316 SO libspdk_bdev_split.so.6.0 00:03:46.316 LIB libspdk_bdev_null.a 00:03:46.316 LIB libspdk_bdev_ftl.a 00:03:46.316 SO libspdk_bdev_null.so.6.0 00:03:46.316 LIB libspdk_bdev_passthru.a 00:03:46.316 LIB libspdk_bdev_aio.a 00:03:46.316 SYMLINK libspdk_bdev_gpt.so 00:03:46.316 SYMLINK libspdk_bdev_error.so 00:03:46.316 SYMLINK libspdk_bdev_split.so 00:03:46.316 SO libspdk_bdev_ftl.so.6.0 00:03:46.316 SO libspdk_bdev_passthru.so.6.0 00:03:46.316 SO 
libspdk_bdev_aio.so.6.0 00:03:46.316 LIB libspdk_bdev_zone_block.a 00:03:46.316 LIB libspdk_bdev_iscsi.a 00:03:46.316 LIB libspdk_bdev_delay.a 00:03:46.316 SYMLINK libspdk_bdev_null.so 00:03:46.316 SO libspdk_bdev_zone_block.so.6.0 00:03:46.316 SO libspdk_bdev_iscsi.so.6.0 00:03:46.316 SYMLINK libspdk_bdev_ftl.so 00:03:46.316 SO libspdk_bdev_delay.so.6.0 00:03:46.316 SYMLINK libspdk_bdev_passthru.so 00:03:46.316 LIB libspdk_bdev_malloc.a 00:03:46.316 SYMLINK libspdk_bdev_aio.so 00:03:46.316 SYMLINK libspdk_bdev_zone_block.so 00:03:46.316 SO libspdk_bdev_malloc.so.6.0 00:03:46.316 SYMLINK libspdk_bdev_iscsi.so 00:03:46.316 SYMLINK libspdk_bdev_delay.so 00:03:46.574 LIB libspdk_bdev_lvol.a 00:03:46.574 SYMLINK libspdk_bdev_malloc.so 00:03:46.574 LIB libspdk_bdev_virtio.a 00:03:46.574 SO libspdk_bdev_lvol.so.6.0 00:03:46.574 SO libspdk_bdev_virtio.so.6.0 00:03:46.574 SYMLINK libspdk_bdev_lvol.so 00:03:46.574 SYMLINK libspdk_bdev_virtio.so 00:03:46.831 LIB libspdk_bdev_raid.a 00:03:46.831 SO libspdk_bdev_raid.so.6.0 00:03:47.087 SYMLINK libspdk_bdev_raid.so 00:03:48.018 LIB libspdk_bdev_nvme.a 00:03:48.275 SO libspdk_bdev_nvme.so.7.0 00:03:48.275 SYMLINK libspdk_bdev_nvme.so 00:03:48.533 CC module/event/subsystems/iobuf/iobuf.o 00:03:48.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:48.533 CC module/event/subsystems/scheduler/scheduler.o 00:03:48.533 CC module/event/subsystems/vmd/vmd.o 00:03:48.533 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:48.533 CC module/event/subsystems/keyring/keyring.o 00:03:48.533 CC module/event/subsystems/sock/sock.o 00:03:48.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:48.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:48.792 LIB libspdk_event_keyring.a 00:03:48.792 LIB libspdk_event_vhost_blk.a 00:03:48.792 LIB libspdk_event_sock.a 00:03:48.792 LIB libspdk_event_scheduler.a 00:03:48.792 LIB libspdk_event_vfu_tgt.a 00:03:48.792 LIB libspdk_event_vmd.a 00:03:48.792 LIB libspdk_event_iobuf.a 00:03:48.792 SO 
libspdk_event_sock.so.5.0 00:03:48.792 SO libspdk_event_scheduler.so.4.0 00:03:48.792 SO libspdk_event_vhost_blk.so.3.0 00:03:48.792 SO libspdk_event_keyring.so.1.0 00:03:48.792 SO libspdk_event_vfu_tgt.so.3.0 00:03:48.792 SO libspdk_event_vmd.so.6.0 00:03:48.792 SO libspdk_event_iobuf.so.3.0 00:03:48.792 SYMLINK libspdk_event_keyring.so 00:03:48.792 SYMLINK libspdk_event_sock.so 00:03:48.792 SYMLINK libspdk_event_vfu_tgt.so 00:03:48.792 SYMLINK libspdk_event_vhost_blk.so 00:03:48.792 SYMLINK libspdk_event_scheduler.so 00:03:48.792 SYMLINK libspdk_event_vmd.so 00:03:48.792 SYMLINK libspdk_event_iobuf.so 00:03:49.049 CC module/event/subsystems/accel/accel.o 00:03:49.305 LIB libspdk_event_accel.a 00:03:49.305 SO libspdk_event_accel.so.6.0 00:03:49.305 SYMLINK libspdk_event_accel.so 00:03:49.562 CC module/event/subsystems/bdev/bdev.o 00:03:49.562 LIB libspdk_event_bdev.a 00:03:49.562 SO libspdk_event_bdev.so.6.0 00:03:49.819 SYMLINK libspdk_event_bdev.so 00:03:49.819 CC module/event/subsystems/nbd/nbd.o 00:03:49.819 CC module/event/subsystems/ublk/ublk.o 00:03:49.819 CC module/event/subsystems/scsi/scsi.o 00:03:49.819 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:49.819 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:50.076 LIB libspdk_event_nbd.a 00:03:50.076 LIB libspdk_event_ublk.a 00:03:50.076 LIB libspdk_event_scsi.a 00:03:50.076 SO libspdk_event_ublk.so.3.0 00:03:50.076 SO libspdk_event_nbd.so.6.0 00:03:50.076 SO libspdk_event_scsi.so.6.0 00:03:50.076 SYMLINK libspdk_event_nbd.so 00:03:50.076 SYMLINK libspdk_event_ublk.so 00:03:50.076 SYMLINK libspdk_event_scsi.so 00:03:50.076 LIB libspdk_event_nvmf.a 00:03:50.076 SO libspdk_event_nvmf.so.6.0 00:03:50.076 SYMLINK libspdk_event_nvmf.so 00:03:50.334 CC module/event/subsystems/iscsi/iscsi.o 00:03:50.334 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:50.334 LIB libspdk_event_vhost_scsi.a 00:03:50.334 LIB libspdk_event_iscsi.a 00:03:50.334 SO libspdk_event_vhost_scsi.so.3.0 00:03:50.334 SO 
libspdk_event_iscsi.so.6.0 00:03:50.592 SYMLINK libspdk_event_vhost_scsi.so 00:03:50.592 SYMLINK libspdk_event_iscsi.so 00:03:50.592 SO libspdk.so.6.0 00:03:50.592 SYMLINK libspdk.so 00:03:50.855 CXX app/trace/trace.o 00:03:50.855 CC app/spdk_nvme_perf/perf.o 00:03:50.855 CC app/trace_record/trace_record.o 00:03:50.855 TEST_HEADER include/spdk/accel.h 00:03:50.855 TEST_HEADER include/spdk/accel_module.h 00:03:50.855 CC test/rpc_client/rpc_client_test.o 00:03:50.855 TEST_HEADER include/spdk/assert.h 00:03:50.855 CC app/spdk_top/spdk_top.o 00:03:50.855 CC app/spdk_lspci/spdk_lspci.o 00:03:50.855 TEST_HEADER include/spdk/barrier.h 00:03:50.855 TEST_HEADER include/spdk/base64.h 00:03:50.855 CC app/spdk_nvme_discover/discovery_aer.o 00:03:50.855 CC app/spdk_nvme_identify/identify.o 00:03:50.855 TEST_HEADER include/spdk/bdev.h 00:03:50.855 TEST_HEADER include/spdk/bdev_module.h 00:03:50.855 TEST_HEADER include/spdk/bdev_zone.h 00:03:50.855 TEST_HEADER include/spdk/bit_array.h 00:03:50.855 TEST_HEADER include/spdk/bit_pool.h 00:03:50.855 TEST_HEADER include/spdk/blob_bdev.h 00:03:50.855 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:50.855 TEST_HEADER include/spdk/blobfs.h 00:03:50.855 TEST_HEADER include/spdk/blob.h 00:03:50.855 TEST_HEADER include/spdk/conf.h 00:03:50.855 TEST_HEADER include/spdk/config.h 00:03:50.855 TEST_HEADER include/spdk/cpuset.h 00:03:50.855 TEST_HEADER include/spdk/crc16.h 00:03:50.855 TEST_HEADER include/spdk/crc32.h 00:03:50.855 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:50.855 TEST_HEADER include/spdk/crc64.h 00:03:50.855 TEST_HEADER include/spdk/dif.h 00:03:50.855 CC app/spdk_dd/spdk_dd.o 00:03:50.855 TEST_HEADER include/spdk/dma.h 00:03:50.855 TEST_HEADER include/spdk/endian.h 00:03:50.855 TEST_HEADER include/spdk/env_dpdk.h 00:03:50.855 CC app/iscsi_tgt/iscsi_tgt.o 00:03:50.855 TEST_HEADER include/spdk/env.h 00:03:50.855 CC app/nvmf_tgt/nvmf_main.o 00:03:50.855 TEST_HEADER include/spdk/event.h 00:03:50.855 TEST_HEADER 
include/spdk/fd_group.h 00:03:50.855 TEST_HEADER include/spdk/fd.h 00:03:50.855 TEST_HEADER include/spdk/file.h 00:03:50.855 TEST_HEADER include/spdk/ftl.h 00:03:50.855 CC app/vhost/vhost.o 00:03:50.855 TEST_HEADER include/spdk/gpt_spec.h 00:03:50.855 TEST_HEADER include/spdk/hexlify.h 00:03:50.855 TEST_HEADER include/spdk/histogram_data.h 00:03:50.855 TEST_HEADER include/spdk/idxd.h 00:03:50.855 TEST_HEADER include/spdk/idxd_spec.h 00:03:50.855 TEST_HEADER include/spdk/init.h 00:03:50.855 TEST_HEADER include/spdk/ioat.h 00:03:50.855 TEST_HEADER include/spdk/ioat_spec.h 00:03:50.855 TEST_HEADER include/spdk/iscsi_spec.h 00:03:50.855 CC examples/ioat/verify/verify.o 00:03:50.855 CC test/env/vtophys/vtophys.o 00:03:50.855 TEST_HEADER include/spdk/json.h 00:03:50.855 CC app/spdk_tgt/spdk_tgt.o 00:03:50.855 CC examples/ioat/perf/perf.o 00:03:50.855 TEST_HEADER include/spdk/jsonrpc.h 00:03:50.855 CC examples/vmd/lsvmd/lsvmd.o 00:03:50.855 TEST_HEADER include/spdk/keyring.h 00:03:50.855 CC examples/sock/hello_world/hello_sock.o 00:03:50.855 CC test/env/memory/memory_ut.o 00:03:51.124 TEST_HEADER include/spdk/keyring_module.h 00:03:51.124 CC examples/util/zipf/zipf.o 00:03:51.124 CC examples/idxd/perf/perf.o 00:03:51.124 CC examples/nvme/hello_world/hello_world.o 00:03:51.124 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.124 TEST_HEADER include/spdk/likely.h 00:03:51.124 CC examples/accel/perf/accel_perf.o 00:03:51.124 CC test/env/pci/pci_ut.o 00:03:51.124 CC test/app/histogram_perf/histogram_perf.o 00:03:51.124 CC test/event/event_perf/event_perf.o 00:03:51.124 TEST_HEADER include/spdk/log.h 00:03:51.124 CC test/event/reactor_perf/reactor_perf.o 00:03:51.124 CC test/event/reactor/reactor.o 00:03:51.124 TEST_HEADER include/spdk/lvol.h 00:03:51.124 TEST_HEADER include/spdk/memory.h 00:03:51.124 CC app/fio/nvme/fio_plugin.o 00:03:51.124 TEST_HEADER include/spdk/mmio.h 00:03:51.124 TEST_HEADER include/spdk/nbd.h 00:03:51.124 CC 
test/thread/poller_perf/poller_perf.o 00:03:51.124 CC test/nvme/aer/aer.o 00:03:51.124 TEST_HEADER include/spdk/notify.h 00:03:51.124 TEST_HEADER include/spdk/nvme.h 00:03:51.124 TEST_HEADER include/spdk/nvme_intel.h 00:03:51.124 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:51.124 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:51.124 TEST_HEADER include/spdk/nvme_spec.h 00:03:51.124 TEST_HEADER include/spdk/nvme_zns.h 00:03:51.124 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:51.124 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:51.124 CC test/bdev/bdevio/bdevio.o 00:03:51.124 TEST_HEADER include/spdk/nvmf.h 00:03:51.124 TEST_HEADER include/spdk/nvmf_spec.h 00:03:51.124 CC test/blobfs/mkfs/mkfs.o 00:03:51.124 TEST_HEADER include/spdk/nvmf_transport.h 00:03:51.124 CC examples/nvmf/nvmf/nvmf.o 00:03:51.124 TEST_HEADER include/spdk/opal.h 00:03:51.124 CC test/accel/dif/dif.o 00:03:51.124 TEST_HEADER include/spdk/opal_spec.h 00:03:51.124 TEST_HEADER include/spdk/pci_ids.h 00:03:51.124 CC app/fio/bdev/fio_plugin.o 00:03:51.124 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.124 CC examples/thread/thread/thread_ex.o 00:03:51.124 TEST_HEADER include/spdk/pipe.h 00:03:51.124 CC test/app/bdev_svc/bdev_svc.o 00:03:51.124 TEST_HEADER include/spdk/queue.h 00:03:51.124 TEST_HEADER include/spdk/reduce.h 00:03:51.124 CC examples/blob/hello_world/hello_blob.o 00:03:51.124 TEST_HEADER include/spdk/rpc.h 00:03:51.124 TEST_HEADER include/spdk/scheduler.h 00:03:51.124 CC test/dma/test_dma/test_dma.o 00:03:51.124 TEST_HEADER include/spdk/scsi.h 00:03:51.124 CC examples/bdev/bdevperf/bdevperf.o 00:03:51.124 TEST_HEADER include/spdk/scsi_spec.h 00:03:51.124 TEST_HEADER include/spdk/sock.h 00:03:51.124 TEST_HEADER include/spdk/stdinc.h 00:03:51.124 TEST_HEADER include/spdk/string.h 00:03:51.124 TEST_HEADER include/spdk/thread.h 00:03:51.124 TEST_HEADER include/spdk/trace.h 00:03:51.124 TEST_HEADER include/spdk/trace_parser.h 00:03:51.124 TEST_HEADER include/spdk/tree.h 00:03:51.124 
TEST_HEADER include/spdk/ublk.h 00:03:51.124 CC test/env/mem_callbacks/mem_callbacks.o 00:03:51.124 TEST_HEADER include/spdk/util.h 00:03:51.124 TEST_HEADER include/spdk/uuid.h 00:03:51.124 TEST_HEADER include/spdk/version.h 00:03:51.124 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:51.124 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:51.124 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:51.124 TEST_HEADER include/spdk/vhost.h 00:03:51.124 TEST_HEADER include/spdk/vmd.h 00:03:51.124 TEST_HEADER include/spdk/xor.h 00:03:51.124 LINK spdk_lspci 00:03:51.124 TEST_HEADER include/spdk/zipf.h 00:03:51.124 CXX test/cpp_headers/accel.o 00:03:51.124 CC test/lvol/esnap/esnap.o 00:03:51.124 LINK rpc_client_test 00:03:51.385 LINK spdk_nvme_discover 00:03:51.385 LINK lsvmd 00:03:51.385 LINK interrupt_tgt 00:03:51.385 LINK vtophys 00:03:51.385 LINK event_perf 00:03:51.385 LINK reactor_perf 00:03:51.385 LINK reactor 00:03:51.385 LINK zipf 00:03:51.385 LINK nvmf_tgt 00:03:51.385 LINK histogram_perf 00:03:51.385 LINK poller_perf 00:03:51.385 LINK env_dpdk_post_init 00:03:51.385 LINK vhost 00:03:51.385 LINK spdk_trace_record 00:03:51.385 LINK iscsi_tgt 00:03:51.385 LINK verify 00:03:51.385 LINK spdk_tgt 00:03:51.385 LINK ioat_perf 00:03:51.385 LINK hello_world 00:03:51.385 LINK bdev_svc 00:03:51.385 LINK hello_sock 00:03:51.385 LINK mkfs 00:03:51.649 CXX test/cpp_headers/accel_module.o 00:03:51.649 LINK hello_blob 00:03:51.649 LINK hello_bdev 00:03:51.649 LINK aer 00:03:51.649 CXX test/cpp_headers/assert.o 00:03:51.649 LINK thread 00:03:51.649 LINK spdk_dd 00:03:51.649 CXX test/cpp_headers/barrier.o 00:03:51.649 LINK idxd_perf 00:03:51.649 CXX test/cpp_headers/base64.o 00:03:51.649 CC test/event/app_repeat/app_repeat.o 00:03:51.649 LINK nvmf 00:03:51.649 CC examples/vmd/led/led.o 00:03:51.649 LINK pci_ut 00:03:51.649 LINK spdk_trace 00:03:51.649 CC test/app/jsoncat/jsoncat.o 00:03:51.909 CC examples/nvme/reconnect/reconnect.o 00:03:51.909 CC test/nvme/reset/reset.o 00:03:51.909 CC 
examples/nvme/nvme_manage/nvme_manage.o 00:03:51.909 CXX test/cpp_headers/bdev.o 00:03:51.909 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:51.909 CC test/app/stub/stub.o 00:03:51.909 LINK bdevio 00:03:51.909 LINK test_dma 00:03:51.909 CC examples/blob/cli/blobcli.o 00:03:51.909 CC examples/nvme/arbitration/arbitration.o 00:03:51.909 CC examples/nvme/hotplug/hotplug.o 00:03:51.909 LINK dif 00:03:51.909 CC test/nvme/sgl/sgl.o 00:03:51.909 LINK accel_perf 00:03:51.909 CC test/nvme/e2edp/nvme_dp.o 00:03:51.909 CC test/event/scheduler/scheduler.o 00:03:51.909 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:51.909 LINK nvme_fuzz 00:03:51.909 CC test/nvme/overhead/overhead.o 00:03:51.909 CC test/nvme/err_injection/err_injection.o 00:03:51.909 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:51.909 CXX test/cpp_headers/bdev_module.o 00:03:51.909 LINK led 00:03:51.909 LINK app_repeat 00:03:52.185 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:52.185 CC test/nvme/startup/startup.o 00:03:52.185 LINK spdk_bdev 00:03:52.185 CC test/nvme/reserve/reserve.o 00:03:52.185 LINK spdk_nvme 00:03:52.185 LINK jsoncat 00:03:52.185 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:52.185 CC examples/nvme/abort/abort.o 00:03:52.185 CC test/nvme/connect_stress/connect_stress.o 00:03:52.185 CC test/nvme/simple_copy/simple_copy.o 00:03:52.185 CC test/nvme/boot_partition/boot_partition.o 00:03:52.185 CC test/nvme/fused_ordering/fused_ordering.o 00:03:52.185 LINK stub 00:03:52.185 CC test/nvme/compliance/nvme_compliance.o 00:03:52.185 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:52.185 CXX test/cpp_headers/bdev_zone.o 00:03:52.185 CXX test/cpp_headers/bit_array.o 00:03:52.185 CXX test/cpp_headers/bit_pool.o 00:03:52.185 CXX test/cpp_headers/blob_bdev.o 00:03:52.185 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.185 CC test/nvme/cuse/cuse.o 00:03:52.185 CXX test/cpp_headers/blobfs.o 00:03:52.185 CXX test/cpp_headers/blob.o 00:03:52.185 CXX test/cpp_headers/conf.o 00:03:52.185 CXX 
test/cpp_headers/config.o 00:03:52.185 CXX test/cpp_headers/cpuset.o 00:03:52.185 CC test/nvme/fdp/fdp.o 00:03:52.475 CXX test/cpp_headers/crc16.o 00:03:52.475 LINK mem_callbacks 00:03:52.475 LINK reset 00:03:52.475 CXX test/cpp_headers/crc32.o 00:03:52.475 CXX test/cpp_headers/crc64.o 00:03:52.475 CXX test/cpp_headers/dif.o 00:03:52.475 LINK hotplug 00:03:52.475 LINK err_injection 00:03:52.475 LINK startup 00:03:52.475 LINK scheduler 00:03:52.475 LINK spdk_nvme_perf 00:03:52.475 LINK cmb_copy 00:03:52.475 LINK pmr_persistence 00:03:52.475 LINK sgl 00:03:52.475 CXX test/cpp_headers/dma.o 00:03:52.475 LINK nvme_dp 00:03:52.475 LINK spdk_nvme_identify 00:03:52.475 LINK reserve 00:03:52.475 CXX test/cpp_headers/endian.o 00:03:52.475 LINK boot_partition 00:03:52.475 LINK reconnect 00:03:52.475 LINK connect_stress 00:03:52.475 LINK arbitration 00:03:52.475 LINK bdevperf 00:03:52.475 LINK spdk_top 00:03:52.475 LINK overhead 00:03:52.475 LINK fused_ordering 00:03:52.758 LINK simple_copy 00:03:52.758 CXX test/cpp_headers/env_dpdk.o 00:03:52.758 CXX test/cpp_headers/env.o 00:03:52.758 CXX test/cpp_headers/event.o 00:03:52.758 LINK doorbell_aers 00:03:52.758 CXX test/cpp_headers/fd_group.o 00:03:52.758 CXX test/cpp_headers/fd.o 00:03:52.758 CXX test/cpp_headers/file.o 00:03:52.758 CXX test/cpp_headers/ftl.o 00:03:52.758 CXX test/cpp_headers/gpt_spec.o 00:03:52.758 CXX test/cpp_headers/hexlify.o 00:03:52.758 CXX test/cpp_headers/histogram_data.o 00:03:52.758 CXX test/cpp_headers/idxd.o 00:03:52.758 CXX test/cpp_headers/idxd_spec.o 00:03:52.758 CXX test/cpp_headers/init.o 00:03:52.758 CXX test/cpp_headers/ioat.o 00:03:52.758 CXX test/cpp_headers/ioat_spec.o 00:03:52.758 LINK nvme_manage 00:03:52.758 CXX test/cpp_headers/iscsi_spec.o 00:03:52.758 CXX test/cpp_headers/json.o 00:03:52.758 CXX test/cpp_headers/jsonrpc.o 00:03:52.758 CXX test/cpp_headers/keyring.o 00:03:52.758 LINK vhost_fuzz 00:03:52.758 CXX test/cpp_headers/keyring_module.o 00:03:52.758 CXX 
test/cpp_headers/likely.o 00:03:52.758 LINK blobcli 00:03:52.758 CXX test/cpp_headers/log.o 00:03:52.758 CXX test/cpp_headers/lvol.o 00:03:52.758 CXX test/cpp_headers/memory.o 00:03:52.758 CXX test/cpp_headers/mmio.o 00:03:52.758 CXX test/cpp_headers/nbd.o 00:03:52.758 CXX test/cpp_headers/notify.o 00:03:52.758 CXX test/cpp_headers/nvme.o 00:03:52.758 CXX test/cpp_headers/nvme_intel.o 00:03:52.758 CXX test/cpp_headers/nvme_ocssd.o 00:03:52.758 LINK abort 00:03:52.758 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.758 CXX test/cpp_headers/nvme_spec.o 00:03:52.758 CXX test/cpp_headers/nvme_zns.o 00:03:52.758 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.021 LINK nvme_compliance 00:03:53.021 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.021 CXX test/cpp_headers/nvmf.o 00:03:53.021 CXX test/cpp_headers/nvmf_spec.o 00:03:53.021 CXX test/cpp_headers/nvmf_transport.o 00:03:53.021 CXX test/cpp_headers/opal.o 00:03:53.021 LINK fdp 00:03:53.021 CXX test/cpp_headers/opal_spec.o 00:03:53.021 CXX test/cpp_headers/pci_ids.o 00:03:53.021 CXX test/cpp_headers/pipe.o 00:03:53.021 CXX test/cpp_headers/queue.o 00:03:53.021 CXX test/cpp_headers/reduce.o 00:03:53.021 CXX test/cpp_headers/rpc.o 00:03:53.021 CXX test/cpp_headers/scheduler.o 00:03:53.021 CXX test/cpp_headers/scsi.o 00:03:53.021 CXX test/cpp_headers/scsi_spec.o 00:03:53.021 CXX test/cpp_headers/sock.o 00:03:53.021 CXX test/cpp_headers/stdinc.o 00:03:53.021 CXX test/cpp_headers/string.o 00:03:53.021 CXX test/cpp_headers/thread.o 00:03:53.021 CXX test/cpp_headers/trace.o 00:03:53.021 CXX test/cpp_headers/trace_parser.o 00:03:53.279 CXX test/cpp_headers/tree.o 00:03:53.279 LINK memory_ut 00:03:53.279 CXX test/cpp_headers/ublk.o 00:03:53.279 CXX test/cpp_headers/util.o 00:03:53.279 CXX test/cpp_headers/uuid.o 00:03:53.279 CXX test/cpp_headers/version.o 00:03:53.279 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.279 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.279 CXX test/cpp_headers/vhost.o 00:03:53.279 CXX test/cpp_headers/vmd.o 
00:03:53.279 CXX test/cpp_headers/xor.o 00:03:53.279 CXX test/cpp_headers/zipf.o 00:03:54.212 LINK cuse 00:03:54.212 LINK iscsi_fuzz 00:03:57.504 LINK esnap 00:03:57.504 00:03:57.504 real 0m41.006s 00:03:57.504 user 7m36.649s 00:03:57.504 sys 1m50.057s 00:03:57.504 13:39:35 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:57.504 13:39:35 make -- common/autotest_common.sh@10 -- $ set +x 00:03:57.504 ************************************ 00:03:57.504 END TEST make 00:03:57.504 ************************************ 00:03:57.504 13:39:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.504 13:39:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.504 13:39:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.504 13:39:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.504 13:39:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.504 13:39:35 -- pm/common@44 -- $ pid=1215872 00:03:57.504 13:39:35 -- pm/common@50 -- $ kill -TERM 1215872 00:03:57.504 13:39:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.504 13:39:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.504 13:39:35 -- pm/common@44 -- $ pid=1215874 00:03:57.504 13:39:35 -- pm/common@50 -- $ kill -TERM 1215874 00:03:57.504 13:39:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.504 13:39:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:57.504 13:39:35 -- pm/common@44 -- $ pid=1215876 00:03:57.504 13:39:35 -- pm/common@50 -- $ kill -TERM 1215876 00:03:57.504 13:39:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.504 13:39:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 
00:03:57.504 13:39:35 -- pm/common@44 -- $ pid=1215907 00:03:57.504 13:39:35 -- pm/common@50 -- $ sudo -E kill -TERM 1215907 00:03:57.504 13:39:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.504 13:39:35 -- nvmf/common.sh@7 -- # uname -s 00:03:57.504 13:39:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.504 13:39:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.504 13:39:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.504 13:39:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.504 13:39:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.504 13:39:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.504 13:39:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.504 13:39:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.504 13:39:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.504 13:39:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.505 13:39:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.505 13:39:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.505 13:39:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.505 13:39:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.505 13:39:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.505 13:39:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.505 13:39:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.505 13:39:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.505 13:39:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.505 13:39:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.505 13:39:35 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.505 13:39:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.505 13:39:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.505 13:39:35 -- paths/export.sh@5 -- # export PATH 00:03:57.505 13:39:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.505 13:39:35 -- nvmf/common.sh@47 -- # : 0 00:03:57.505 13:39:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.505 13:39:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.505 13:39:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.505 13:39:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.505 13:39:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.505 13:39:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.505 13:39:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.505 13:39:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.505 13:39:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.505 13:39:35 
-- spdk/autotest.sh@32 -- # uname -s 00:03:57.505 13:39:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.505 13:39:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.505 13:39:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.505 13:39:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.505 13:39:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.505 13:39:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.505 13:39:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.505 13:39:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.505 13:39:35 -- spdk/autotest.sh@48 -- # udevadm_pid=1292410 00:03:57.505 13:39:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.505 13:39:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:57.505 13:39:35 -- pm/common@17 -- # local monitor 00:03:57.505 13:39:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.505 13:39:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.505 13:39:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.505 13:39:35 -- pm/common@21 -- # date +%s 00:03:57.505 13:39:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.505 13:39:35 -- pm/common@21 -- # date +%s 00:03:57.505 13:39:35 -- pm/common@25 -- # sleep 1 00:03:57.505 13:39:35 -- pm/common@21 -- # date +%s 00:03:57.505 13:39:35 -- pm/common@21 -- # date +%s 00:03:57.505 13:39:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720957175 00:03:57.505 13:39:35 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720957175 00:03:57.505 13:39:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720957175 00:03:57.505 13:39:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720957175 00:03:57.505 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720957175_collect-vmstat.pm.log 00:03:57.505 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720957175_collect-cpu-load.pm.log 00:03:57.505 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720957175_collect-cpu-temp.pm.log 00:03:57.505 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720957175_collect-bmc-pm.bmc.pm.log 00:03:58.878 13:39:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.878 13:39:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:58.878 13:39:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:58.878 13:39:36 -- common/autotest_common.sh@10 -- # set +x 00:03:58.878 13:39:36 -- spdk/autotest.sh@59 -- # create_test_list 00:03:58.878 13:39:36 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:58.878 13:39:36 -- common/autotest_common.sh@10 -- # set +x 00:03:58.878 13:39:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:58.878 13:39:36 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.878 13:39:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.878 13:39:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:58.878 13:39:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.878 13:39:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:58.878 13:39:36 -- common/autotest_common.sh@1451 -- # uname 00:03:58.878 13:39:36 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:58.878 13:39:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:58.878 13:39:36 -- common/autotest_common.sh@1471 -- # uname 00:03:58.878 13:39:36 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:58.878 13:39:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:58.878 13:39:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:58.878 13:39:36 -- spdk/autotest.sh@72 -- # hash lcov 00:03:58.878 13:39:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:58.878 13:39:36 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:58.878 --rc lcov_branch_coverage=1 00:03:58.878 --rc lcov_function_coverage=1 00:03:58.878 --rc genhtml_branch_coverage=1 00:03:58.878 --rc genhtml_function_coverage=1 00:03:58.878 --rc genhtml_legend=1 00:03:58.878 --rc geninfo_all_blocks=1 00:03:58.878 ' 00:03:58.878 13:39:36 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:58.878 --rc lcov_branch_coverage=1 00:03:58.878 --rc lcov_function_coverage=1 00:03:58.878 --rc genhtml_branch_coverage=1 00:03:58.878 --rc genhtml_function_coverage=1 00:03:58.878 --rc genhtml_legend=1 00:03:58.878 --rc geninfo_all_blocks=1 00:03:58.878 ' 00:03:58.878 13:39:36 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:58.878 --rc lcov_branch_coverage=1 00:03:58.878 --rc lcov_function_coverage=1 00:03:58.878 --rc genhtml_branch_coverage=1 00:03:58.878 --rc 
genhtml_function_coverage=1 00:03:58.878 --rc genhtml_legend=1 00:03:58.878 --rc geninfo_all_blocks=1 00:03:58.878 --no-external' 00:03:58.878 13:39:36 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:58.879 --rc lcov_branch_coverage=1 00:03:58.879 --rc lcov_function_coverage=1 00:03:58.879 --rc genhtml_branch_coverage=1 00:03:58.879 --rc genhtml_function_coverage=1 00:03:58.879 --rc genhtml_legend=1 00:03:58.879 --rc geninfo_all_blocks=1 00:03:58.879 --no-external' 00:03:58.879 13:39:36 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:58.879 lcov: LCOV version 1.14 00:03:58.879 13:39:36 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:13.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:13.744 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:28.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:28.612 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions 
found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:28.612 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:28.612 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no 
functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:28.613 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:28.613 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:28.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:28.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:28.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:28.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:28.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:28.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:28.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:28.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:28.614 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:28.614 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:31.908 13:40:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:31.908 13:40:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:31.908 13:40:09 -- common/autotest_common.sh@10 -- # set +x 00:04:31.908 13:40:09 -- spdk/autotest.sh@91 -- # rm -f 00:04:31.908 13:40:09 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.845 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:32.845 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:32.845 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:32.845 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:32.845 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:32.845 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:32.845 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:32.845 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:32.845 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:32.845 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:32.845 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:32.845 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:32.845 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:32.845 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:32.845 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:32.845 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:32.845 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:32.845 13:40:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:32.845 13:40:10 -- common/autotest_common.sh@1665 -- # zoned_devs=() 
00:04:32.845 13:40:10 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:32.845 13:40:10 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:32.845 13:40:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:32.845 13:40:10 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:32.845 13:40:10 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:32.845 13:40:10 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:32.845 13:40:10 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:32.845 13:40:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:32.845 13:40:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.845 13:40:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:32.845 13:40:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:32.845 13:40:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:32.845 13:40:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:33.105 No valid GPT data, bailing 00:04:33.105 13:40:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.105 13:40:10 -- scripts/common.sh@391 -- # pt= 00:04:33.105 13:40:10 -- scripts/common.sh@392 -- # return 1 00:04:33.105 13:40:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:33.105 1+0 records in 00:04:33.105 1+0 records out 00:04:33.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00228455 s, 459 MB/s 00:04:33.105 13:40:10 -- spdk/autotest.sh@118 -- # sync 00:04:33.105 13:40:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:33.105 13:40:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:33.105 13:40:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:35.009 13:40:12 -- spdk/autotest.sh@124 -- # uname -s 00:04:35.009 13:40:12 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:35.009 13:40:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:35.009 13:40:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.009 13:40:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.009 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.009 ************************************ 00:04:35.009 START TEST setup.sh 00:04:35.009 ************************************ 00:04:35.009 13:40:12 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:35.009 * Looking for test storage... 00:04:35.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:35.009 13:40:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:35.009 13:40:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:35.009 13:40:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:35.010 13:40:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.010 13:40:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.010 13:40:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.010 ************************************ 00:04:35.010 START TEST acl 00:04:35.010 ************************************ 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:35.010 * Looking for test storage... 
00:04:35.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:35.010 13:40:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.010 13:40:12 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:35.010 13:40:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:35.010 13:40:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:35.010 13:40:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:35.010 13:40:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:35.010 13:40:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:35.010 13:40:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.010 13:40:12 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.427 13:40:14 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:36.427 13:40:14 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:36.427 13:40:14 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:36.427 13:40:14 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:36.427 13:40:14 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.427 13:40:14 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:37.358 Hugepages 00:04:37.358 node hugesize free / total 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.358 00:04:37.358 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.358 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:37.615 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.615 13:40:15 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 
13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:37.616 13:40:15 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:37.616 13:40:15 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:37.616 13:40:15 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.616 13:40:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.616 ************************************ 00:04:37.616 START TEST denied 00:04:37.616 ************************************ 00:04:37.616 13:40:15 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:37.616 13:40:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:37.616 13:40:15 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:37.616 13:40:15 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:37.616 13:40:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.616 13:40:15 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.985 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.985 13:40:16 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.512 00:04:41.512 real 0m3.762s 00:04:41.512 user 0m1.105s 00:04:41.513 sys 0m1.770s 00:04:41.513 13:40:19 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.513 13:40:19 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:41.513 ************************************ 00:04:41.513 END TEST denied 00:04:41.513 ************************************ 00:04:41.513 13:40:19 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:41.513 13:40:19 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.513 13:40:19 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.513 13:40:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.513 ************************************ 00:04:41.513 START TEST allowed 00:04:41.513 
************************************ 00:04:41.513 13:40:19 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:41.513 13:40:19 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:41.513 13:40:19 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:41.513 13:40:19 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:41.513 13:40:19 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.513 13:40:19 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.043 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:44.043 13:40:21 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:44.043 13:40:21 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:44.043 13:40:21 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:44.043 13:40:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.043 13:40:21 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.422 00:04:45.422 real 0m3.765s 00:04:45.422 user 0m0.997s 00:04:45.422 sys 0m1.593s 00:04:45.422 13:40:23 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.422 13:40:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:45.422 ************************************ 00:04:45.422 END TEST allowed 00:04:45.422 ************************************ 00:04:45.422 00:04:45.422 real 0m10.261s 00:04:45.422 user 0m3.210s 00:04:45.422 sys 0m5.055s 00:04:45.423 13:40:23 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.423 13:40:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:45.423 ************************************ 00:04:45.423 END TEST acl 00:04:45.423 ************************************ 00:04:45.423 13:40:23 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:45.423 13:40:23 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.423 13:40:23 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.423 13:40:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.423 ************************************ 00:04:45.423 START TEST hugepages 00:04:45.423 ************************************ 00:04:45.423 13:40:23 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:45.423 * Looking for test storage... 00:04:45.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.423 
13:40:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 41866876 kB' 'MemAvailable: 45348108 kB' 'Buffers: 2704 kB' 'Cached: 12210756 kB' 'SwapCached: 0 kB' 'Active: 9163192 kB' 'Inactive: 3491992 kB' 'Active(anon): 8776328 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444960 kB' 'Mapped: 189396 kB' 'Shmem: 8334604 kB' 'KReclaimable: 192428 kB' 'Slab: 541104 kB' 'SReclaimable: 192428 kB' 'SUnreclaim: 348676 kB' 'KernelStack: 12752 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 9883824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 
13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 
13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.423 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:45.424 13:40:23 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... the common.sh@31/@32 read-and-continue pair repeats for each remaining /proc/meminfo field (VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) until Hugepagesize matches ...]
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- #
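The scan traced above (setup/common.sh's get_meminfo) walks /proc/meminfo with `IFS=': ' read -r var val _`, skipping every field until the requested key matches, then echoes its value (here `2048` for Hugepagesize). A minimal standalone sketch of that pattern, under the assumption of the same line format — the function name is hypothetical, not the SPDK helper itself:

```shell
# get_meminfo_sketch KEY -- hypothetical re-creation of the traced
# pattern: split each meminfo-style line on ':' and whitespace into
# key/value/unit and print the value of the first matching key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example against a fixed sample instead of the live /proc/meminfo:
sample='MemTotal:       60541692 kB
Hugepagesize:       2048 kB'
hugepagesize=$(printf '%s\n' "$sample" | get_meminfo_sketch Hugepagesize)
```

Because ':' sits in IFS alongside the default whitespace, the trailing colon on the key and the run of padding spaces are consumed in one split, which is why the trace never strips them explicitly.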
local node hp
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
[... the @40/@41 echo-0 pair repeats for each hugepage size directory on each of the two nodes ...]
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:45.424 13:40:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:45.424 13:40:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:45.424 13:40:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:45.425 13:40:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:45.425 ************************************
00:04:45.425 START TEST default_setup
00:04:45.425 ************************************
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.425 13:40:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:46.801 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:46.801 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:46.801 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:47.745 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43981924 kB' 'MemAvailable: 47463156 kB' 'Buffers: 2704 kB' 'Cached: 12210844 kB' 'SwapCached: 0 kB' 'Active: 9180660 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793796 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462276 kB' 'Mapped: 189380 kB' 'Shmem: 8334692 kB' 'KReclaimable: 192428 kB' 'Slab: 540208 kB' 'SReclaimable: 192428 kB' 'SUnreclaim: 347780 kB' 'KernelStack: 12736 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9900808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB'
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.745 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the common.sh@31/@32 read-and-continue pair repeats for each /proc/meminfo field from MemFree through HardwareCorrupted until AnonHugePages matches ...]
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup --
setup/common.sh@19 -- # local var val
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43984760 kB' 'MemAvailable: 47465992 kB' 'Buffers: 2704 kB' 'Cached: 12210844 kB' 'SwapCached: 0 kB' 'Active: 9180460 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793596 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462124 kB' 'Mapped: 189372 kB' 'Shmem: 8334692 kB' 'KReclaimable: 192428 kB' 'Slab: 540172 kB' 'SReclaimable: 192428 kB' 'SUnreclaim: 347744 kB' 'KernelStack: 12736 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9900824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB'
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.746 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the common.sh@31/@32 read-and-continue pair repeats for the fields MemFree through Mlocked ...]
00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:47.747 13:40:25
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.747 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 
13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43985544 kB' 'MemAvailable: 47466776 kB' 'Buffers: 2704 kB' 'Cached: 12210860 kB' 'SwapCached: 0 kB' 'Active: 9180388 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793524 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462012 kB' 'Mapped: 189372 kB' 'Shmem: 8334708 kB' 'KReclaimable: 192428 kB' 'Slab: 540292 kB' 'SReclaimable: 192428 kB' 'SUnreclaim: 347864 kB' 'KernelStack: 12672 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9900848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 
'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.748 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.749 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.749 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.750 nr_hugepages=1024 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.750 resv_hugepages=0 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.750 
surplus_hugepages=0 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.750 anon_hugepages=0 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43985444 kB' 'MemAvailable: 47466676 kB' 'Buffers: 2704 kB' 'Cached: 12210884 kB' 'SwapCached: 0 kB' 'Active: 9180296 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793432 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 
'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461900 kB' 'Mapped: 189372 kB' 'Shmem: 8334732 kB' 'KReclaimable: 192428 kB' 'Slab: 540284 kB' 'SReclaimable: 192428 kB' 'SUnreclaim: 347856 kB' 'KernelStack: 12656 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9900868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.750 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.750 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19638832 kB' 'MemUsed: 13238108 kB' 'SwapCached: 0 kB' 'Active: 6749300 kB' 'Inactive: 
3263392 kB' 'Active(anon): 6564464 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9784496 kB' 'Mapped: 107592 kB' 'AnonPages: 231332 kB' 'Shmem: 6336268 kB' 'KernelStack: 7336 kB' 'PageTables: 4940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114084 kB' 'Slab: 295228 kB' 'SReclaimable: 114084 kB' 'SUnreclaim: 181144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.752 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.753 13:40:25 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:47.753 13:40:25 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.012 node0=1024 expecting 1024 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.012 00:04:48.012 real 0m2.490s 00:04:48.012 user 0m0.704s 00:04:48.012 sys 0m0.885s 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.012 13:40:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:48.012 ************************************ 00:04:48.012 END TEST default_setup 00:04:48.012 ************************************ 00:04:48.012 13:40:25 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:48.012 13:40:25 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.012 13:40:25 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.012 13:40:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.012 ************************************ 00:04:48.012 START TEST per_node_1G_alloc 00:04:48.012 ************************************ 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:48.012 
13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:48.012 13:40:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.012 13:40:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.950 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:48.950 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:48.950 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:48.950 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:48.950 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:48.950 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:48.950 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:48.950 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:48.950 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:48.950 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:48.950 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:48.950 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:48.950 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:48.950 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:48.950 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:48.950 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:48.950 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.216 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43980308 kB' 'MemAvailable: 47461532 kB' 'Buffers: 2704 kB' 'Cached: 12210960 kB' 'SwapCached: 0 kB' 'Active: 9180784 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793920 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462456 kB' 'Mapped: 189436 kB' 'Shmem: 8334808 kB' 'KReclaimable: 192412 kB' 'Slab: 540104 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 347692 kB' 'KernelStack: 12688 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.216 
13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.216 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.217 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.217 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43981512 kB' 'MemAvailable: 47462736 kB' 'Buffers: 2704 kB' 'Cached: 12210964 kB' 'SwapCached: 0 kB' 'Active: 9181164 kB' 'Inactive: 3491992 kB' 'Active(anon): 8794300 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462380 kB' 'Mapped: 189388 kB' 'Shmem: 8334812 kB' 'KReclaimable: 192412 kB' 'Slab: 540160 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 347748 kB' 'KernelStack: 12720 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 
00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.218 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [identical continue/IFS/read trace repeated for Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43981512 kB' 'MemAvailable: 47462736 kB' 'Buffers: 2704 kB' 'Cached: 12210964 kB' 'SwapCached: 0 kB' 'Active: 9180968 kB' 'Inactive: 3491992 kB' 
'Active(anon): 8794104 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462204 kB' 'Mapped: 189388 kB' 'Shmem: 8334812 kB' 'KReclaimable: 192412 kB' 'Slab: 540140 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 347728 kB' 'KernelStack: 12704 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.219 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [identical continue/IFS/read trace repeated for each remaining /proc/meminfo field, MemFree through Unaccepted] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:49.221 nr_hugepages=1024 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.221 resv_hugepages=0 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.221 surplus_hugepages=0 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.221 anon_hugepages=0 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43982628 kB' 'MemAvailable: 47463852 kB' 'Buffers: 2704 kB' 'Cached: 12210968 kB' 'SwapCached: 0 kB' 'Active: 9180268 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793404 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461908 kB' 'Mapped: 189388 kB' 'Shmem: 8334816 kB' 'KReclaimable: 192412 kB' 'Slab: 540140 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 347728 kB' 'KernelStack: 12720 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.221 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [identical continue/IFS/read trace repeated for MemFree through Zswap] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.222 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.223 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.223 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20679608 kB' 'MemUsed: 12197332 kB' 'SwapCached: 0 kB' 'Active: 6749744 kB' 'Inactive: 3263392 kB' 'Active(anon): 6564908 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9784568 kB' 'Mapped: 107608 kB' 'AnonPages: 231760 kB' 'Shmem: 6336340 kB' 'KernelStack: 7352 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114068 kB' 'Slab: 295112 kB' 'SReclaimable: 114068 kB' 'SUnreclaim: 181044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.223 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.224 13:40:27 [xtrace elided: the setup/common.sh@31/@32 read loop skips the remaining node0 meminfo fields -- NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- with identical continue iterations until the requested field matches] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.224 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
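[editor's note] The get_meminfo calls traced above follow a simple pattern: prefer the per-node meminfo file under /sys when a node number is given, strip the `Node N ` prefix those files carry, and scan line by line with `IFS=': '` until the requested field matches. A standalone sketch of that technique follows; the split into a separate `scan_meminfo` helper and the sample-file usage are conveniences for illustration, not the actual layout of setup/common.sh.

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip the prefix

# Scan a meminfo-style file for one field and print its value.
scan_meminfo() {
    local file=$1 get=$2
    local var val _ line
    local -a mem
    mapfile -t mem < "$file"
    # per-node files prefix every line with "Node N ", e.g. "Node 1 MemTotal: ..."
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # "HugePages_Surp: 0" splits into var=HugePages_Surp val=0
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the same field-by-field skip the xtrace shows
        echo "$val"
        return 0
    done
    return 1
}

# The traced helper: fall back to /proc/meminfo unless a per-node file exists.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    scan_meminfo "$mem_f" "$get"
}
```

On a NUMA machine like this rig, `get_meminfo HugePages_Surp 1` reads node1's meminfo and prints the surplus-page count that the log shows being echoed.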
00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23303636 kB' 'MemUsed: 4361116 kB' 'SwapCached: 0 kB' 'Active: 2430764 kB' 'Inactive: 228600 kB' 'Active(anon): 2228736 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2429164 kB' 'Mapped: 81780 kB' 'AnonPages: 230324 kB' 'Shmem: 1998536 kB' 'KernelStack: 5336 kB' 'PageTables: 2772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78344 kB' 'Slab: 245012 kB' 'SReclaimable: 78344 kB' 'SUnreclaim: 166668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:49.225 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.225 13:40:27 [xtrace elided: the setup/common.sh@31/@32 read loop skips the remaining node1 meminfo fields -- MemFree, MemUsed, SwapCached, the Active/Inactive counters and their anon/file splits, Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, the hugepage counters, HugePages_Total, HugePages_Free -- with identical continue iterations until the requested field matches] 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- #
sorted_s[nodes_sys[node]]=1 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.226 node0=512 expecting 512 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:49.226 node1=512 expecting 512 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.226 00:04:49.226 real 0m1.404s 00:04:49.226 user 0m0.603s 00:04:49.226 sys 0m0.761s 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.226 13:40:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.226 ************************************ 00:04:49.226 END TEST per_node_1G_alloc 00:04:49.226 ************************************ 00:04:49.226 13:40:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:49.226 13:40:27 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.226 13:40:27 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.226 13:40:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 ************************************ 00:04:49.487 START TEST even_2G_alloc 00:04:49.487 ************************************ 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
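[editor's note] The even_2G_alloc setup traced above converts the requested size into a hugepage count (2097152 kB of 2048 kB pages gives nr_hugepages=1024) and, with no user-specified nodes, splits it evenly across the NUMA nodes, yielding 512 per node. A minimal sketch of that arithmetic; the node count is hard-coded to 2 to match this two-node rig, whereas the real script derives it from the system:

```shell
#!/usr/bin/env bash
# Requested test size in kB (2 GiB) and the default 2 MiB hugepage size in kB.
size=2097152
default_hugepages=2048
nr_hugepages=$(( size / default_hugepages ))   # 1024 pages total

no_nodes=2                                     # assumption: two NUMA nodes, as on this rig
declare -a nodes_test
per_node=$(( nr_hugepages / no_nodes ))        # even split, as HUGE_EVEN_ALLOC=yes requests

# Fill the per-node table from the highest node down, mirroring the
# "(( _no_nodes > 0 ))" countdown in the trace.
node=$no_nodes
while (( node > 0 )); do
    nodes_test[node - 1]=$per_node
    (( node-- ))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

With these inputs the script prints node0=512 node1=512, which is exactly the "node0=512 expecting 512" / "node1=512 expecting 512" outcome the previous test verified.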
00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.487 13:40:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.426 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.426 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:50.426 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.426 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.426 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.426 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.426 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.426 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.426 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.426 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.426 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.426 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.426 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.426 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.426 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.426 0000:80:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:04:50.426 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.426 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43989968 kB' 'MemAvailable: 47471192 kB' 'Buffers: 2704 kB' 'Cached: 12211104 kB' 'SwapCached: 0 kB' 'Active: 9180960 kB' 'Inactive: 3491992 kB' 'Active(anon): 8794096 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462408 kB' 'Mapped: 189480 kB' 'Shmem: 8334952 kB' 'KReclaimable: 192412 kB' 'Slab: 540468 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 348056 kB' 'KernelStack: 12688 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.426 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.426 13:40:28 [xtrace elided: the setup/common.sh@31/@32 read loop likewise skips MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive counters and their anon/file splits, Unevictable, Mlocked, SwapTotal and SwapFree with identical continue iterations while scanning for AnonHugePages] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 
13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.691 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43990128 kB' 'MemAvailable: 47471352 kB' 'Buffers: 2704 kB' 'Cached: 12211108 kB' 'SwapCached: 0 kB' 'Active: 9180892 kB' 'Inactive: 3491992 kB' 'Active(anon): 8794028 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 
'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462360 kB' 'Mapped: 189476 kB' 'Shmem: 8334956 kB' 'KReclaimable: 192412 kB' 'Slab: 540468 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 348056 kB' 'KernelStack: 12752 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.692 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.693 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.693 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (per-key scan continues over the remaining /proc/meminfo fields, WritebackTmp through HugePages_Rsvd; none matches HugePages_Surp) 00:04:50.694 13:40:28
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43990712 kB' 'MemAvailable: 47471936 kB' 'Buffers: 2704 
kB' 'Cached: 12211124 kB' 'SwapCached: 0 kB' 'Active: 9180772 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793908 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462200 kB' 'Mapped: 189400 kB' 'Shmem: 8334972 kB' 'KReclaimable: 192412 kB' 'Slab: 540436 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 348024 kB' 'KernelStack: 12768 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.694 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (per-key scan continues over the remaining /proc/meminfo fields, MemAvailable through HugePages_Free; none matches HugePages_Rsvd) 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- #
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.696 nr_hugepages=1024 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.696 resv_hugepages=0 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.696 surplus_hugepages=0 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.696 anon_hugepages=0 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.696 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43991000 kB' 'MemAvailable: 47472224 kB' 'Buffers: 2704 kB' 'Cached: 12211144 kB' 'SwapCached: 0 kB' 'Active: 9180796 kB' 'Inactive: 3491992 kB' 'Active(anon): 8793932 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462200 kB' 'Mapped: 189400 kB' 'Shmem: 8334992 kB' 'KReclaimable: 192412 kB' 'Slab: 540436 kB' 'SReclaimable: 192412 kB' 'SUnreclaim: 348024 kB' 'KernelStack: 12768 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9901548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (per-key scan continues over the /proc/meminfo fields, MemFree through Inactive(anon); none matches HugePages_Total) 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 --
# [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.696 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 
00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:50.697 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.698 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20691580 kB' 'MemUsed: 12185360 kB' 'SwapCached: 0 kB' 'Active: 6749784 kB' 'Inactive: 3263392 kB' 'Active(anon): 6564948 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9784688 kB' 'Mapped: 107620 kB' 'AnonPages: 231676 kB' 'Shmem: 6336460 kB' 'KernelStack: 7368 kB' 'PageTables: 4984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114068 kB' 'Slab: 295228 kB' 'SReclaimable: 114068 kB' 'SUnreclaim: 181160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.698 13:40:28 
00:04:50.698 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: IFS=': '; read -r var val _; each remaining node0 meminfo field (NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) fails the HugePages_Surp comparison and continues]
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23299420 kB' 'MemUsed: 4365332 kB' 'SwapCached: 0 kB' 'Active: 2431020 kB' 'Inactive: 228600 kB' 'Active(anon): 2228992 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2429184 kB' 'Mapped: 81780 kB' 'AnonPages: 230524 kB' 'Shmem: 1998556 kB' 'KernelStack: 5400 kB' 'PageTables: 2816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78344 kB' 'Slab: 245208 kB' 'SReclaimable: 78344 kB' 'SUnreclaim: 166864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:50.699 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: every node1 field from MemTotal through HugePages_Free fails the HugePages_Surp comparison and continues]
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:50.700 node0=512 expecting 512
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:50.700 node1=512 expecting 512
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:50.700 real 0m1.330s
00:04:50.700 user 0m0.564s
00:04:50.700 sys 0m0.727s
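The long runs of setup/common.sh@31-32 checks in this trace come from get_meminfo scanning a meminfo file one "key: value" line at a time until the requested key matches. A minimal standalone sketch of that scan, assuming the same mapfile / prefix-strip / read pattern shown in the trace (the MEMINFO_FILE override is a test-only convenience added here, not part of the real script):

```shell
#!/usr/bin/env bash
# Sketch of the scan traced at setup/common.sh@28-33: load the meminfo
# file, strip any "Node N " prefix, then compare each key until the
# requested one is found and print its value.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    # MEMINFO_FILE is a test-only override; the traced script uses
    # /proc/meminfo or the per-node sysfs path directly.
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated @32 comparisons
        echo "$val"                        # the final @33 echo
        return 0
    done
    return 1
}

# Demo against a fabricated two-line per-node meminfo file:
sample=$(mktemp)
printf 'Node 1 MemTotal: 27664752 kB\nNode 1 HugePages_Surp: 0\n' > "$sample"
MEMINFO_FILE=$sample get_meminfo HugePages_Surp   # prints 0
rm -f "$sample"
```

This explains why the trace repeats the same @31/@32 pair dozens of times per lookup: every field before the requested one produces one comparison and one `continue` under `set -x`.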
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:50.700 13:40:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:50.700 ************************************
00:04:50.700 END TEST even_2G_alloc
00:04:50.700 ************************************
00:04:50.700 13:40:28 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:50.700 13:40:28 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:50.700 13:40:28 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:50.700 13:40:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:50.701 ************************************
00:04:50.701 START TEST odd_alloc
00:04:50.701 ************************************
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.701 13:40:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:52.083 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:52.083 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:52.083 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:52.083 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:52.083 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:52.083 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:52.083 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:52.083 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:52.083 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:52.083 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:52.083 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:52.083 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:52.083 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:52.083 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:52.083 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:52.083 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:52.083 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.083 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43978572 kB' 'MemAvailable: 47459744 kB' 'Buffers: 2704 kB' 'Cached: 12211232 kB' 'SwapCached: 0 kB' 'Active: 9177660 kB' 'Inactive: 3491992 kB' 'Active(anon): 8790796 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458888 kB' 'Mapped: 188528 kB' 'Shmem: 8335080 kB' 'KReclaimable: 192308 kB' 'Slab: 540068 kB' 'SReclaimable: 192308 kB' 'SUnreclaim: 347760 kB' 'KernelStack: 12672 kB' 'PageTables: 7384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 9886272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB'
00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace condensed: fields MemTotal through Writeback each fail the AnonHugePages comparison and continue; the trace continues past this excerpt]
00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.084 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43984356 kB' 'MemAvailable: 47465528 kB' 'Buffers: 
2704 kB' 'Cached: 12211232 kB' 'SwapCached: 0 kB' 'Active: 9177956 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791092 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459188 kB' 'Mapped: 188528 kB' 'Shmem: 8335080 kB' 'KReclaimable: 192308 kB' 'Slab: 540064 kB' 'SReclaimable: 192308 kB' 'SUnreclaim: 347756 kB' 'KernelStack: 12640 kB' 'PageTables: 7280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 9886288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.085 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.085 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.086 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.087 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43983348 kB' 'MemAvailable: 47464520 kB' 'Buffers: 2704 kB' 'Cached: 12211252 kB' 'SwapCached: 0 kB' 'Active: 9177556 kB' 'Inactive: 3491992 kB' 'Active(anon): 8790692 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458752 kB' 'Mapped: 188464 kB' 'Shmem: 8335100 kB' 'KReclaimable: 192308 kB' 'Slab: 540064 kB' 'SReclaimable: 192308 kB' 'SUnreclaim: 347756 kB' 'KernelStack: 12672 kB' 'PageTables: 7356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 9886308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.087 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:52.088 nr_hugepages=1025 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.088 resv_hugepages=0 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.088 surplus_hugepages=0 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.088 anon_hugepages=0 00:04:52.088 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc 
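The trace above is the expanded `set -x` output of a loop that scans `/proc/meminfo` one field at a time, skipping every key until the requested one (`HugePages_Surp`, then `HugePages_Rsvd`) matches, and echoing its value. A minimal sketch of that pattern, assuming a self-contained helper and sample input (the function name mirrors the trace, but this is illustrative, not the actual `setup/common.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each line on
# ': ' into key/value, skip non-matching keys, print the matching value.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # Every non-matching key corresponds to one "continue" line in the trace.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

# Hypothetical sample mirroring the hugepage fields dumped in this log.
sample='HugePages_Total: 1025
HugePages_Free: 1025
HugePages_Rsvd: 0
HugePages_Surp: 0'

get_meminfo HugePages_Surp <<< "$sample"   # prints 0
get_meminfo HugePages_Total <<< "$sample"  # prints 1025
```

This explains why the log is so long: for each lookup the loop tests every `/proc/meminfo` key against the target, so a single `get_meminfo HugePages_Rsvd` call emits one trace line per field before returning `0`.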
-- setup/common.sh@19 -- # local var val 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43983760 kB' 'MemAvailable: 47464932 kB' 'Buffers: 2704 kB' 'Cached: 12211272 kB' 'SwapCached: 0 kB' 'Active: 9177504 kB' 'Inactive: 3491992 kB' 'Active(anon): 8790640 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458684 kB' 'Mapped: 188464 kB' 'Shmem: 8335120 kB' 'KReclaimable: 192308 kB' 'Slab: 540100 kB' 'SReclaimable: 192308 kB' 'SUnreclaim: 347792 kB' 'KernelStack: 12656 kB' 'PageTables: 7324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 9886332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.089 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
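The long runs of `IFS=': '` / `read -r var val _` / `continue` records above are the trace of `setup/common.sh`'s `get_meminfo` helper scanning every key in `/proc/meminfo` (or a per-node `meminfo` file) until it reaches the field it was asked for; each non-matching key produces one `continue` record. A minimal standalone sketch of that loop, assuming Linux and bash with `extglob` enabled (the function name here is hypothetical; the real helper lives in `setup/common.sh`):

```shell
#!/usr/bin/env bash
shopt -s extglob

# Sketch of the get_meminfo pattern traced above: pick the per-node meminfo
# file when a node index is given, strip any "Node N " prefixes, then scan
# "key: value" pairs until the requested field matches. Every key that does
# not match corresponds to one of the "continue" records in the trace.
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix each line with "Node N "; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

For example, `get_meminfo_sketch HugePages_Total` prints the system-wide count (1025 in the run traced here), while `get_meminfo_sketch HugePages_Surp 0` reads node 0's view from `/sys/devices/system/node/node0/meminfo` instead.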
00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local 
node 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.090 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20687580 kB' 'MemUsed: 12189360 kB' 'SwapCached: 0 kB' 'Active: 6749368 kB' 'Inactive: 3263392 kB' 'Active(anon): 6564532 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9784804 kB' 'Mapped: 106904 kB' 'AnonPages: 231124 kB' 'Shmem: 6336576 kB' 'KernelStack: 7272 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114060 kB' 'Slab: 295008 kB' 'SReclaimable: 114060 kB' 'SUnreclaim: 180948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.090 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.090 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.091 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.092 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23296608 kB' 'MemUsed: 4368144 kB' 'SwapCached: 0 kB' 'Active: 2428068 kB' 'Inactive: 228600 kB' 'Active(anon): 2226040 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2429192 kB' 'Mapped: 81560 kB' 'AnonPages: 227516 kB' 'Shmem: 1998564 kB' 'KernelStack: 5368 kB' 'PageTables: 2656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78248 kB' 'Slab: 245092 kB' 'SReclaimable: 78248 kB' 
'SUnreclaim: 166844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 
13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.092 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:52.093 node0=512 expecting 513 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:52.093 node1=513 expecting 512 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:52.093 00:04:52.093 real 0m1.373s 00:04:52.093 user 0m0.580s 00:04:52.093 sys 0m0.754s 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.093 13:40:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.093 ************************************ 00:04:52.093 END TEST odd_alloc 00:04:52.093 ************************************ 00:04:52.093 13:40:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:52.093 13:40:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.093 13:40:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.093 13:40:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.093 ************************************ 00:04:52.093 START TEST custom_alloc 00:04:52.093 ************************************ 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.093 13:40:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:52.093 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:52.094 13:40:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:52.094 13:40:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.094 13:40:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.479 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:53.479 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.479 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.479 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.479 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.479 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.479 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.479 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.479 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.479 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.479 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.479 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.479 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.479 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.479 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.479 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.479 0000:80:04.0 (8086 0e20): Already using 
the vfio-pci driver 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.479 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42920620 kB' 'MemAvailable: 46401760 kB' 'Buffers: 2704 kB' 'Cached: 12211364 kB' 'SwapCached: 0 kB' 'Active: 9178792 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791928 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460456 kB' 'Mapped: 188496 kB' 'Shmem: 8335212 kB' 'KReclaimable: 192244 kB' 'Slab: 539956 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 347712 kB' 'KernelStack: 12720 kB' 'PageTables: 7500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9886696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.479 13:40:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31-32 -- # [trace condensed, 00:04:53.479-00:04:53.480 13:40:31 setup.sh.hugepages.custom_alloc: the IFS=': ' / read -r var val _ loop scans every /proc/meminfo key in turn (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted), issuing "continue" for each non-matching key until [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] matches] 00:04:53.480
13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.480 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.480 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42920660 kB' 'MemAvailable: 46401800 kB' 'Buffers: 2704 kB' 'Cached: 12211364 kB' 'SwapCached: 0 kB' 'Active: 9178624 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791760 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 459876 kB' 'Mapped: 188560 kB' 'Shmem: 8335212 kB' 'KReclaimable: 192244 kB' 'Slab: 539896 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 347652 kB' 'KernelStack: 12704 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9886712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.481 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.481 13:40:31 
setup/common.sh@31-32 -- # [trace condensed, 00:04:53.481-00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc: the same IFS=': ' / read -r var val _ loop scans the remaining /proc/meminfo keys (Buffers through HugePages_Rsvd), issuing "continue" for each non-matching key until [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] matches] 00:04:53.482 13:40:31
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.482 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42920660 kB' 'MemAvailable: 46401800 kB' 'Buffers: 2704 kB' 'Cached: 12211388 kB' 'SwapCached: 0 kB' 'Active: 9178540 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791676 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 459752 kB' 'Mapped: 188480 kB' 'Shmem: 8335236 kB' 'KReclaimable: 192244 kB' 'Slab: 539892 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 347648 kB' 'KernelStack: 12736 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9886736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 
13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.483 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:53.484 nr_hugepages=1536 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.484 resv_hugepages=0 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.484 surplus_hugepages=0 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.484 anon_hugepages=0 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.484 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42920660 kB' 'MemAvailable: 46401800 kB' 
'Buffers: 2704 kB' 'Cached: 12211408 kB' 'SwapCached: 0 kB' 'Active: 9178564 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791700 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459756 kB' 'Mapped: 188480 kB' 'Shmem: 8335256 kB' 'KReclaimable: 192244 kB' 'Slab: 539892 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 347648 kB' 'KernelStack: 12736 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 9886756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 
13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.485 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.485 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 
13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20678404 kB' 'MemUsed: 
12198536 kB' 'SwapCached: 0 kB' 'Active: 6750524 kB' 'Inactive: 3263392 kB' 'Active(anon): 6565688 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9784928 kB' 'Mapped: 106920 kB' 'AnonPages: 232132 kB' 'Shmem: 6336700 kB' 'KernelStack: 7336 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114060 kB' 'Slab: 294976 kB' 'SReclaimable: 114060 kB' 'SUnreclaim: 180916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.486 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.487 13:40:31 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.487 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22242648 kB' 'MemUsed: 5422104 kB' 'SwapCached: 0 kB' 'Active: 2428068 kB' 'Inactive: 228600 kB' 'Active(anon): 2226040 kB' 'Inactive(anon): 0 kB' 'Active(file): 202028 kB' 'Inactive(file): 228600 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2429204 kB' 'Mapped: 81560 kB' 'AnonPages: 227588 kB' 'Shmem: 1998576 kB' 'KernelStack: 5384 kB' 'PageTables: 2652 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 78184 kB' 'Slab: 244908 kB' 'SReclaimable: 78184 kB' 'SUnreclaim: 166724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.488 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue iterations for the remaining meminfo fields ...] 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.489 13:40:31
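The trace above repeatedly exercises the script's `get_meminfo` helper: it selects `/sys/devices/system/node/node<N>/meminfo` when a node is given (falling back to `/proc/meminfo`), strips the `Node <n> ` line prefix, then scans `field: value` pairs with `IFS=': ' read -r var val _` until the requested field matches. A minimal standalone sketch of that pattern (the function body here is illustrative, not the SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-lookup pattern shown in the xtrace (illustrative only).
# $1 = field to look up (e.g. HugePages_Surp), $2 = a meminfo-style file.
get_meminfo() {
  local key=$1 file=$2 var val _
  # Per-node files prefix every line with "Node <n> "; drop that first, then
  # split on ": " the same way the traced `IFS=': ' read -r var val _` does.
  while IFS=': ' read -r var val _; do
    if [[ $var == "$key" ]]; then
      echo "$val"
      return 0
    fi
  done < <(sed -E 's/^Node [0-9]+ //' "$file")
  echo 0   # field not present: fall back to 0
}
```

Using the per-node sysfs file rather than `/proc/meminfo` is what lets the test verify hugepage counts for node 0 and node 1 separately.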
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.489 node0=512 expecting 512 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:53.489 node1=1024 expecting 1024 00:04:53.489 13:40:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:53.748 00:04:53.748 real 0m1.453s 00:04:53.748 user 0m0.625s 00:04:53.748 sys 0m0.792s 00:04:53.748 13:40:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.748 13:40:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.748 ************************************ 00:04:53.748 END TEST custom_alloc 00:04:53.749 ************************************ 00:04:53.749 13:40:31 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:53.749 13:40:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.749 13:40:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.749 13:40:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.749 ************************************ 00:04:53.749 START TEST no_shrink_alloc 00:04:53.749 ************************************ 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:53.749 13:40:31 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
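The `get_test_nr_hugepages` / `get_test_nr_hugepages_per_node` trace above converts a requested size into a page count and assigns it to each user-requested node. A rough sketch of that accounting, assuming (as the traced values suggest) the size is in kB and the default hugepage size is 2048 kB, with variable names mirroring the trace but not taken from the SPDK code:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage accounting the trace implies (assumed names).
size=2097152                                   # requested total, in kB
default_hugepages=2048                         # hugepage size, in kB (2 MiB)
nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages

declare -A nodes_test
user_nodes=(0)                                 # this test pins pages to node 0
for node in "${user_nodes[@]}"; do
  nodes_test[$node]=$nr_hugepages              # each listed node gets the count
done
echo "node0=${nodes_test[0]}"
```

This matches the trace's `nr_hugepages=1024` and `nodes_test[_no_nodes]=1024` steps, and explains the later `node0=512 expecting 512` / `node1=1024 expecting 1024` checks in the custom-allocation variant.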
00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.749 13:40:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.719 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.719 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.719 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.719 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.719 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.719 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.719 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.719 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.719 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.719 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.719 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.719 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.719 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.719 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.719 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.719 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.719 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.982 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43944120 kB' 'MemAvailable: 47425256 kB' 'Buffers: 2704 kB' 'Cached: 12211492 kB' 'SwapCached: 0 kB' 'Active: 9178956 kB' 'Inactive: 3491992 kB' 'Active(anon): 8792092 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460048 kB' 'Mapped: 188636 kB' 'Shmem: 8335340 kB' 'KReclaimable: 192236 kB' 'Slab: 539660 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347424 kB' 'KernelStack: 12720 kB' 'PageTables: 7400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9886820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue iterations for the intervening meminfo fields ...] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.983 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.983 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.984 
13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43944120 kB' 'MemAvailable: 47425256 kB' 'Buffers: 2704 kB' 'Cached: 12211496 kB' 'SwapCached: 0 kB' 'Active: 9178876 kB' 'Inactive: 3491992 kB' 'Active(anon): 8792012 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459988 kB' 'Mapped: 
188572 kB' 'Shmem: 8335344 kB' 'KReclaimable: 192236 kB' 'Slab: 539644 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347408 kB' 'KernelStack: 12752 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9886840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.984 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 
13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 
13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.985 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43944548 kB' 'MemAvailable: 47425684 kB' 'Buffers: 2704 kB' 'Cached: 12211496 kB' 'SwapCached: 0 kB' 'Active: 9178468 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791604 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459548 kB' 'Mapped: 188496 kB' 'Shmem: 8335344 kB' 'KReclaimable: 192236 kB' 'Slab: 539652 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347416 kB' 'KernelStack: 12752 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9886860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.986 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.987 
13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.987 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.988 nr_hugepages=1024 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.988 resv_hugepages=0 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.988 surplus_hugepages=0 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.988 anon_hugepages=0 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.988 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43944044 kB' 'MemAvailable: 47425180 kB' 'Buffers: 2704 kB' 'Cached: 12211500 kB' 'SwapCached: 0 kB' 'Active: 9178644 kB' 'Inactive: 3491992 kB' 'Active(anon): 8791780 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459720 kB' 'Mapped: 188496 kB' 'Shmem: 8335348 kB' 'KReclaimable: 192236 kB' 'Slab: 539652 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347416 kB' 'KernelStack: 12752 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9886884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.988 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.989 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19641616 kB' 'MemUsed: 13235324 kB' 'SwapCached: 0 kB' 'Active: 6750088 kB' 'Inactive: 3263392 kB' 'Active(anon): 6565252 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9784976 kB' 'Mapped: 106936 kB' 'AnonPages: 231696 kB' 'Shmem: 6336748 kB' 'KernelStack: 7336 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114052 kB' 'Slab: 294868 kB' 'SReclaimable: 114052 kB' 'SUnreclaim: 180816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 
13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 
13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.991 node0=1024 expecting 1024 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.991 13:40:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.372 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.372 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.372 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.372 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.372 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.372 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.372 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.372 0000:00:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:04:56.372 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.372 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.372 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.372 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.372 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.372 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.372 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.372 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.372 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.372 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43934564 kB' 'MemAvailable: 47415700 kB' 'Buffers: 2704 kB' 'Cached: 12211600 kB' 'SwapCached: 0 kB' 'Active: 9179240 kB' 'Inactive: 3491992 kB' 'Active(anon): 8792376 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460156 kB' 'Mapped: 188588 kB' 'Shmem: 8335448 kB' 'KReclaimable: 192236 kB' 'Slab: 539576 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347340 kB' 'KernelStack: 12736 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9887256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.372 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.373 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43934976 kB' 'MemAvailable: 47416112 kB' 'Buffers: 2704 kB' 'Cached: 12211604 kB' 'SwapCached: 0 kB' 'Active: 9179364 kB' 'Inactive: 3491992 kB' 'Active(anon): 8792500 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460292 kB' 'Mapped: 188504 kB' 'Shmem: 8335452 kB' 'KReclaimable: 192236 kB' 'Slab: 539568 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347332 kB' 'KernelStack: 12672 kB' 'PageTables: 7220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9887276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.373 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
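The xtrace above shows `get_meminfo` scanning every `/proc/meminfo` field until it reaches `HugePages_Surp`, then echoing its value. A minimal sketch of that pattern, reconstructed from the trace rather than taken from the actual `setup/common.sh` source: split each line on `IFS=': '` into field name and value, and return the value once the requested field matches.

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-lookup loop seen in the trace (assumption: simplified
# from the xtrace; the real setup/common.sh also handles per-NUMA-node files
# under /sys/devices/system/node/).
get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Surp:     0" into var=HugePages_Surp, val=0
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp   # prints the surplus hugepage count (0 on this node)
```

Comparing the literal field name against the backslash-escaped pattern (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) in the trace is just how `set -x` renders a quoted string inside `[[ ... == ... ]]`; the match is an exact string comparison, not a glob.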
00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43935424 kB' 'MemAvailable: 47416560 kB' 'Buffers: 2704 kB' 'Cached: 12211620 kB' 'SwapCached: 0 kB' 'Active: 9179004 kB' 'Inactive: 3491992 kB' 'Active(anon): 8792140 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459876 kB' 'Mapped: 188504 kB' 'Shmem: 8335468 kB' 'KReclaimable: 192236 kB' 'Slab: 539664 kB' 
'SReclaimable: 192236 kB' 'SUnreclaim: 347428 kB' 'KernelStack: 12736 kB' 'PageTables: 7416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9887296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.374 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.375 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.375 nr_hugepages=1024 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.375 resv_hugepages=0 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.375 surplus_hugepages=0 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.375 anon_hugepages=0 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43935484 kB' 'MemAvailable: 47416620 kB' 'Buffers: 2704 kB' 'Cached: 12211644 kB' 'SwapCached: 0 kB' 'Active: 9179044 kB' 'Inactive: 3491992 kB' 'Active(anon): 8792180 kB' 'Inactive(anon): 0 kB' 'Active(file): 386864 kB' 'Inactive(file): 3491992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459912 kB' 'Mapped: 188504 kB' 'Shmem: 8335492 kB' 'KReclaimable: 192236 kB' 'Slab: 539664 kB' 'SReclaimable: 192236 kB' 'SUnreclaim: 347428 kB' 'KernelStack: 12752 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 9887320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1531484 kB' 'DirectMap2M: 13068288 kB' 'DirectMap1G: 54525952 kB' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.375 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19622952 kB' 'MemUsed: 13253988 kB' 'SwapCached: 0 kB' 'Active: 6750204 kB' 'Inactive: 3263392 kB' 'Active(anon): 6565368 kB' 'Inactive(anon): 0 kB' 'Active(file): 184836 kB' 'Inactive(file): 3263392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9785040 kB' 'Mapped: 106944 kB' 'AnonPages: 231764 kB' 'Shmem: 6336812 kB' 'KernelStack: 7352 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114052 kB' 'Slab: 294804 kB' 'SReclaimable: 114052 kB' 'SUnreclaim: 180752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.376 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:04:56.377 node0=1024 expecting 1024 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.377 00:04:56.377 real 0m2.765s 00:04:56.377 user 0m1.109s 00:04:56.377 sys 0m1.578s 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.377 13:40:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:56.377 ************************************ 00:04:56.377 END TEST no_shrink_alloc 00:04:56.377 ************************************ 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:56.377 13:40:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:56.377 13:40:34 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:56.377 00:04:56.377 real 0m11.181s 00:04:56.377 user 0m4.354s 00:04:56.377 sys 0m5.717s 00:04:56.377 13:40:34 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.377 13:40:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.377 ************************************ 00:04:56.377 END TEST hugepages 00:04:56.377 ************************************ 00:04:56.377 13:40:34 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:56.377 13:40:34 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.377 13:40:34 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.377 13:40:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.377 ************************************ 00:04:56.377 START TEST driver 00:04:56.377 ************************************ 00:04:56.377 13:40:34 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:56.635 * Looking for test storage... 
00:04:56.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.635 13:40:34 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:56.635 13:40:34 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.635 13:40:34 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.172 13:40:36 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:59.172 13:40:36 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.172 13:40:36 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.172 13:40:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.172 ************************************ 00:04:59.172 START TEST guess_driver 00:04:59.172 ************************************ 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:59.172 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:59.172 Looking for driver=vfio-pci 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.172 13:40:36 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.109 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.045 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:01.045 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:01.045 13:40:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:01.305 13:40:39 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:01.305 13:40:39 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:01.305 13:40:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.305 13:40:39 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.839 00:05:03.839 real 0m4.576s 00:05:03.839 user 0m0.990s 00:05:03.839 sys 0m1.731s 00:05:03.839 13:40:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.839 13:40:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:03.839 ************************************ 00:05:03.839 END TEST guess_driver 00:05:03.839 ************************************ 00:05:03.839 00:05:03.839 real 0m7.139s 00:05:03.839 user 0m1.532s 00:05:03.839 sys 0m2.771s 00:05:03.839 13:40:41 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.839 13:40:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:03.839 ************************************ 00:05:03.839 END TEST driver 00:05:03.839 ************************************ 00:05:03.839 13:40:41 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:03.839 13:40:41 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.839 13:40:41 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.839 13:40:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:03.839 ************************************ 00:05:03.839 START TEST devices 00:05:03.839 ************************************ 00:05:03.839 13:40:41 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:03.839 * Looking for test storage... 
00:05:03.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:03.839 13:40:41 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:03.839 13:40:41 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:03.839 13:40:41 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.839 13:40:41 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:05.219 13:40:42 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:05.219 No valid GPT data, bailing 00:05:05.219 13:40:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:05.219 13:40:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:05.219 13:40:42 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:05.219 13:40:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.219 13:40:42 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.219 13:40:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.219 ************************************ 00:05:05.219 START TEST nvme_mount 00:05:05.219 ************************************ 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.219 13:40:43 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:05.219 13:40:43 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:06.157 Creating new GPT entries in memory. 00:05:06.157 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.157 other utilities. 00:05:06.157 13:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.157 13:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.157 13:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.157 13:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.157 13:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:07.095 Creating new GPT entries in memory. 00:05:07.095 The operation has completed successfully. 
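The `sgdisk --new=1:2048:2099199` range in the log follows directly from the arithmetic in `setup/common.sh`: the 1 GiB size (1073741824 bytes) is converted to 512-byte sectors, the first partition starts at sector 2048, and the end sector is start + size - 1. A sketch of that computation:

```shell
# Reproduce the partition-boundary arithmetic behind
# "sgdisk /dev/nvme0n1 --new=1:2048:2099199" in the log.
size=1073741824          # 1 GiB in bytes, as in setup/common.sh@41
(( size /= 512 ))        # bytes -> 512-byte sectors (2097152)
part_start=0 part_end=0
(( part_start = part_start == 0 ? 2048 : part_end + 1 ))   # first partition at 2048
(( part_end = part_start + size - 1 ))                     # inclusive end sector
echo "--new=1:${part_start}:${part_end}"
```

This prints `--new=1:2048:2099199`, matching the flock-wrapped sgdisk call above; the `- 1` is needed because sgdisk's end sector is inclusive.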
00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1312473 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:07.095 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
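The `verify` step that follows reads `pci _ _ status` records from `setup.sh status` output and compares each PCI address against the allowed device. The heavily backslash-escaped right-hand side seen in the log (`\0\0\0\0\:\8\8\:\0\0\.\0`) appears to be just how bash's xtrace renders a quoted string operand of `[[ == ]]`, i.e. a literal rather than glob match; a minimal sketch of the comparison:

```shell
# The right-hand side of [[ == ]] is a glob pattern unless quoted; quoting
# forces a byte-for-byte comparison, and xtrace prints the quoted operand
# with every character backslash-escaped, as seen in the log.
dev=0000:88:00.0                       # the allowed NVMe controller from the log
for pci in 0000:00:04.7 0000:88:00.0; do
    if [[ $pci == "$dev" ]]; then      # literal match, not pattern match
        found=$pci
    fi
done
echo "matched: $found"
```

Only the `0000:88:00.0` record trips the match and sets `found=1` in the real loop; the IOAT channels (`0000:00:04.x`, `0000:80:04.x`) fall through.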
00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.353 13:40:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:08.291 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:08.552 
13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.552 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.552 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.811 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.811 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.811 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.811 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.811 13:40:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.757 13:40:47 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:10.017 13:40:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.017 13:40:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.390 13:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.390 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:11.390 00:05:11.390 real 0m6.161s 00:05:11.390 user 0m1.300s 00:05:11.390 sys 0m2.431s 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.390 13:40:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:11.390 ************************************ 00:05:11.390 END TEST nvme_mount 00:05:11.390 ************************************ 00:05:11.390 13:40:49 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:11.391 13:40:49 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
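The `cleanup_nvme` sequence above unmounts the test mount point if it is mounted, then runs `wipefs --all` on the partition before the parent disk; the `45 46 49 20 50 41 52 54` bytes wipefs reports erasing are the ASCII GPT signature `EFI PART`. A dry-run sketch of that ordering, recording commands instead of executing them (the `run` wrapper and the `*_exists` flags are illustrative stand-ins for the real `mountpoint -q` and `[[ -b ]]` checks):

```shell
# Dry-run sketch of cleanup_nvme from setup/devices.sh: unmount first,
# then wipe the partition, then the whole disk. "run" only records here.
declare -a plan=()
run() { plan+=("$*"); }

is_mounted=1    # stand-in for: mountpoint -q "$nvme_mount"
part_exists=1   # stand-in for: [[ -b /dev/nvme0n1p1 ]]
disk_exists=1   # stand-in for: [[ -b /dev/nvme0n1 ]]

nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
(( is_mounted ))  && run umount "$nvme_mount"
(( part_exists )) && run wipefs --all /dev/nvme0n1p1
(( disk_exists )) && run wipefs --all /dev/nvme0n1
printf '%s\n' "${plan[@]}"
```

Wiping the partition before the disk matters: once the GPT on the disk is destroyed, the kernel re-reads the partition table and `/dev/nvme0n1p1` disappears.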
00:05:11.391 13:40:49 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.391 13:40:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:11.391 ************************************ 00:05:11.391 START TEST dm_mount 00:05:11.391 ************************************ 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.391 13:40:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:12.326 Creating new GPT entries in memory. 00:05:12.327 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.327 other utilities. 00:05:12.327 13:40:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.327 13:40:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.327 13:40:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.327 13:40:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.327 13:40:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.707 Creating new GPT entries in memory. 00:05:13.707 The operation has completed successfully. 00:05:13.707 13:40:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:13.707 13:40:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.707 13:40:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:13.707 13:40:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.707 13:40:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:14.643 The operation has completed successfully. 
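For `dm_mount` the same partitioning helper runs with `part_no=2`, which is why the log shows two sgdisk calls: `--new=1:2048:2099199` followed by `--new=2:2099200:4196351`. Each partition is 1 GiB, and the second starts exactly one sector past the end of the first. A sketch of the loop producing both ranges:

```shell
# Reproduce both sgdisk --new ranges from the dm_mount run: partition 2
# starts at part_end + 1 of partition 1 (setup/common.sh@58-60).
size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors
part_no=2 part_start=0 part_end=0
args=()
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    args+=("--new=${part}:${part_start}:${part_end}")
done
printf '%s\n' "${args[@]}"
```

This emits `--new=1:2048:2099199` and `--new=2:2099200:4196351`, the exact ranges the two `flock ... sgdisk` calls above used; the two partitions are then joined into the `nvme_dm_test` device-mapper target.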
00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1314740 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.643 13:40:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.617 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.884 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.884 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- 
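The long run of `[[ 0000:xx:xx.x == \0\0\0\0\:\8\8\:\0\0\.\0 ]]` lines above is the verify loop comparing every PCI BDF reported by `setup.sh status` against `PCI_ALLOWED`; the backslash-escaped right-hand side is just how bash xtrace renders a quoted (literal, non-glob) comparison string. A condensed sketch of that loop's logic, with the BDF and status text copied from the log (the helper name is ours, not SPDK's):

```shell
# Sketch of the verify loop: every BDF except the allowed one is skipped,
# and "found" flips when the allowed device's status names the expected mount.
PCI_ALLOWED="0000:88:00.0"
found=0
check_status() {
    local pci=$1 status=$2
    [[ $pci == "$PCI_ALLOWED" ]] || return 0   # skip all other BDFs
    [[ $status == *"Active devices: "*"nvme0n1:nvme_dm_test"* ]] && found=1
    return 0
}
check_status "0000:00:04.7" "ioatdma"   # not the allowed BDF: ignored
check_status "0000:88:00.0" \
    "Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev"
echo "found=$found"
```

With the status line from this log, the second call sets `found=1`, matching the `devices.sh@63 -- # found=1` step in the trace.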
setup/devices.sh@51 -- # local test_file= 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.885 13:40:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.821 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:17.080 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:17.080 00:05:17.080 real 0m5.646s 00:05:17.080 user 0m0.952s 00:05:17.080 sys 0m1.550s 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.080 13:40:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:17.080 ************************************ 00:05:17.080 END TEST dm_mount 00:05:17.080 ************************************ 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.080 13:40:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.340 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:17.340 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:17.340 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.340 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.340 13:40:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:17.340 00:05:17.340 real 0m13.647s 00:05:17.340 user 0m2.874s 00:05:17.340 sys 0m4.963s 00:05:17.340 13:40:55 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.340 13:40:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.340 ************************************ 00:05:17.340 END TEST devices 00:05:17.340 ************************************ 00:05:17.340 00:05:17.340 real 0m42.463s 00:05:17.340 user 0m12.064s 00:05:17.340 sys 0m18.662s 00:05:17.340 13:40:55 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.340 13:40:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.340 ************************************ 00:05:17.340 END TEST setup.sh 00:05:17.340 ************************************ 00:05:17.340 13:40:55 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:18.274 Hugepages 00:05:18.274 node hugesize free / total 00:05:18.274 node0 1048576kB 0 / 0 00:05:18.274 node0 2048kB 2048 / 2048 00:05:18.274 node1 1048576kB 0 / 0 00:05:18.274 node1 2048kB 0 / 0 00:05:18.274 00:05:18.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.533 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:18.533 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:18.533 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:18.533 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:18.533 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:18.533 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:18.533 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:18.533 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:18.533 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:18.533 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:18.533 13:40:56 -- spdk/autotest.sh@130 -- # uname -s 00:05:18.533 13:40:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:18.533 13:40:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:18.533 13:40:56 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.908 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:19.909 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:19.909 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:20.478 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:20.738 13:40:58 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:21.676 13:40:59 -- 
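The hugepage table printed by `setup.sh status` above reports 2048 pages of 2048 kB on node0 and none on node1. The total reserved hugepage memory is simply pages times page size; a quick check of the figures in the log:

```shell
# Hugepage memory implied by the "node0 2048kB 2048 / 2048" line above.
pages=2048
page_kb=2048
(( total_kb = pages * page_kb ))
(( total_gib = total_kb / 1024 / 1024 ))
echo "${total_gib} GiB"   # 2048 pages * 2048 kB = 4 GiB reserved on node0
```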
common/autotest_common.sh@1529 -- # bdfs=() 00:05:21.676 13:40:59 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:21.676 13:40:59 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:21.676 13:40:59 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:21.676 13:40:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:21.676 13:40:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:21.676 13:40:59 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.676 13:40:59 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.676 13:40:59 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:21.935 13:40:59 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:21.935 13:40:59 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:21.935 13:40:59 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.871 Waiting for block devices as requested 00:05:22.871 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:23.130 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:23.130 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:23.130 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:23.389 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:23.389 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:23.389 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:23.389 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:23.647 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:23.647 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:23.647 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:23.647 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:23.905 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:23.905 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:23.905 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:05:23.905 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:24.164 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:24.164 13:41:02 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:24.164 13:41:02 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:24.164 13:41:02 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:24.164 13:41:02 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:24.164 13:41:02 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:24.164 13:41:02 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:24.164 13:41:02 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:24.164 13:41:02 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:24.164 13:41:02 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:24.164 13:41:02 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:24.164 13:41:02 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:24.164 13:41:02 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:24.164 13:41:02 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:24.164 13:41:02 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:24.164 13:41:02 -- 
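The `oacs=' 0xf'` / `oacs_ns_manage=8` steps above come from masking the Optional Admin Command Support field that `nvme id-ctrl` reported: bit 3 (0x8) is the Namespace Management capability, and the autotest only proceeds with the namespace revert when it is set. The masking, using the value from this log:

```shell
# How the trace derives oacs_ns_manage=8 from the id-ctrl output above:
# mask OACS with bit 3 (0x8), the Namespace Management capability bit.
oacs=0xf                          # value grep/cut extracted from `nvme id-ctrl`
(( oacs_ns_manage = oacs & 0x8 ))
echo "$oacs_ns_manage"            # non-zero means namespace management is supported
```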
common/autotest_common.sh@1553 -- # continue 00:05:24.164 13:41:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:24.164 13:41:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.164 13:41:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.164 13:41:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:24.164 13:41:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.164 13:41:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.164 13:41:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:25.541 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:25.541 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:25.541 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.478 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:26.478 13:41:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:26.478 13:41:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.478 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.478 13:41:04 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:26.478 13:41:04 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:26.478 13:41:04 -- 
common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.478 13:41:04 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:26.478 13:41:04 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:26.478 13:41:04 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:26.478 13:41:04 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:26.478 13:41:04 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:26.478 13:41:04 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.478 13:41:04 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.478 13:41:04 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:26.478 13:41:04 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:26.478 13:41:04 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:26.478 13:41:04 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:26.478 13:41:04 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:26.478 13:41:04 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:26.478 13:41:04 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:26.478 13:41:04 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:26.478 13:41:04 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:26.478 13:41:04 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:26.478 13:41:04 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1320023 00:05:26.479 13:41:04 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.479 13:41:04 -- common/autotest_common.sh@1594 -- # waitforlisten 1320023 00:05:26.479 13:41:04 -- common/autotest_common.sh@827 -- # '[' -z 1320023 ']' 00:05:26.479 13:41:04 -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:26.479 13:41:04 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.479 13:41:04 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.479 13:41:04 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.479 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.738 [2024-07-14 13:41:04.473918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:26.738 [2024-07-14 13:41:04.474004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320023 ] 00:05:26.738 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.738 [2024-07-14 13:41:04.547365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.738 [2024-07-14 13:41:04.646405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.997 13:41:04 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.997 13:41:04 -- common/autotest_common.sh@860 -- # return 0 00:05:26.997 13:41:04 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:26.997 13:41:04 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:26.997 13:41:04 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:30.292 nvme0n1 00:05:30.292 13:41:07 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:30.292 [2024-07-14 13:41:08.211900] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 
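The `waitforlisten 1320023` step above blocks until the freshly started `spdk_tgt` creates its RPC UNIX socket at `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal sketch of that polling loop, under the simplifying assumption that checking for the socket file is enough (the real helper in autotest_common.sh also verifies the pid and probes the socket with the rpc client):

```shell
# Simplified waitforlisten: poll for the RPC socket, give up after max_retries.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
wait_for_socket() {
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $rpc_addr ]] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                             # timed out waiting for the target
}
```

The function reads the two globals at call time, so a caller can tighten `max_retries` for a fast failure path, as the test below does.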
00:05:30.292 [2024-07-14 13:41:08.211946] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:30.292 request: 00:05:30.292 { 00:05:30.292 "nvme_ctrlr_name": "nvme0", 00:05:30.292 "password": "test", 00:05:30.292 "method": "bdev_nvme_opal_revert", 00:05:30.292 "req_id": 1 00:05:30.292 } 00:05:30.292 Got JSON-RPC error response 00:05:30.292 response: 00:05:30.292 { 00:05:30.292 "code": -32603, 00:05:30.292 "message": "Internal error" 00:05:30.292 } 00:05:30.292 13:41:08 -- common/autotest_common.sh@1600 -- # true 00:05:30.292 13:41:08 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:30.292 13:41:08 -- common/autotest_common.sh@1604 -- # killprocess 1320023 00:05:30.292 13:41:08 -- common/autotest_common.sh@946 -- # '[' -z 1320023 ']' 00:05:30.292 13:41:08 -- common/autotest_common.sh@950 -- # kill -0 1320023 00:05:30.292 13:41:08 -- common/autotest_common.sh@951 -- # uname 00:05:30.292 13:41:08 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.292 13:41:08 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1320023 00:05:30.292 13:41:08 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.292 13:41:08 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.292 13:41:08 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1320023' 00:05:30.292 killing process with pid 1320023 00:05:30.292 13:41:08 -- common/autotest_common.sh@965 -- # kill 1320023 00:05:30.292 13:41:08 -- common/autotest_common.sh@970 -- # wait 1320023 00:05:32.193 13:41:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:32.193 13:41:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:32.193 13:41:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:32.193 13:41:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:32.193 13:41:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:32.193 13:41:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:32.193 13:41:09 -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.193 13:41:09 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:32.193 13:41:09 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:32.193 13:41:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.193 13:41:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.193 13:41:09 -- common/autotest_common.sh@10 -- # set +x 00:05:32.193 ************************************ 00:05:32.193 START TEST env 00:05:32.193 ************************************ 00:05:32.193 13:41:10 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:32.193 * Looking for test storage... 00:05:32.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:32.193 13:41:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:32.193 13:41:10 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.193 13:41:10 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.193 13:41:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.193 ************************************ 00:05:32.193 START TEST env_memory 00:05:32.193 ************************************ 00:05:32.193 13:41:10 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:32.193 00:05:32.193 00:05:32.193 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.193 http://cunit.sourceforge.net/ 00:05:32.193 00:05:32.193 00:05:32.193 Suite: memory 00:05:32.193 Test: alloc and free memory map ...[2024-07-14 13:41:10.106671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:32.193 passed 00:05:32.193 Test: mem map translation ...[2024-07-14 
13:41:10.128312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:32.194 [2024-07-14 13:41:10.128334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:32.194 [2024-07-14 13:41:10.128390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:32.194 [2024-07-14 13:41:10.128402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:32.194 passed 00:05:32.194 Test: mem map registration ...[2024-07-14 13:41:10.171244] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:32.194 [2024-07-14 13:41:10.171264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:32.452 passed 00:05:32.452 Test: mem map adjacent registrations ...passed 00:05:32.452 00:05:32.452 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.452 suites 1 1 n/a 0 0 00:05:32.452 tests 4 4 4 0 0 00:05:32.452 asserts 152 152 152 0 n/a 00:05:32.452 00:05:32.452 Elapsed time = 0.146 seconds 00:05:32.452 00:05:32.452 real 0m0.154s 00:05:32.452 user 0m0.146s 00:05:32.452 sys 0m0.007s 00:05:32.452 13:41:10 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.452 13:41:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:32.452 ************************************ 00:05:32.452 END TEST env_memory 00:05:32.452 
************************************ 00:05:32.452 13:41:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:32.452 13:41:10 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.452 13:41:10 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.452 13:41:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.452 ************************************ 00:05:32.453 START TEST env_vtophys 00:05:32.453 ************************************ 00:05:32.453 13:41:10 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:32.453 EAL: lib.eal log level changed from notice to debug 00:05:32.453 EAL: Detected lcore 0 as core 0 on socket 0 00:05:32.453 EAL: Detected lcore 1 as core 1 on socket 0 00:05:32.453 EAL: Detected lcore 2 as core 2 on socket 0 00:05:32.453 EAL: Detected lcore 3 as core 3 on socket 0 00:05:32.453 EAL: Detected lcore 4 as core 4 on socket 0 00:05:32.453 EAL: Detected lcore 5 as core 5 on socket 0 00:05:32.453 EAL: Detected lcore 6 as core 8 on socket 0 00:05:32.453 EAL: Detected lcore 7 as core 9 on socket 0 00:05:32.453 EAL: Detected lcore 8 as core 10 on socket 0 00:05:32.453 EAL: Detected lcore 9 as core 11 on socket 0 00:05:32.453 EAL: Detected lcore 10 as core 12 on socket 0 00:05:32.453 EAL: Detected lcore 11 as core 13 on socket 0 00:05:32.453 EAL: Detected lcore 12 as core 0 on socket 1 00:05:32.453 EAL: Detected lcore 13 as core 1 on socket 1 00:05:32.453 EAL: Detected lcore 14 as core 2 on socket 1 00:05:32.453 EAL: Detected lcore 15 as core 3 on socket 1 00:05:32.453 EAL: Detected lcore 16 as core 4 on socket 1 00:05:32.453 EAL: Detected lcore 17 as core 5 on socket 1 00:05:32.453 EAL: Detected lcore 18 as core 8 on socket 1 00:05:32.453 EAL: Detected lcore 19 as core 9 on socket 1 00:05:32.453 EAL: Detected lcore 20 as core 10 on socket 1 00:05:32.453 EAL: 
Detected lcore 21 as core 11 on socket 1 00:05:32.453 EAL: Detected lcore 22 as core 12 on socket 1 00:05:32.453 EAL: Detected lcore 23 as core 13 on socket 1 00:05:32.453 EAL: Detected lcore 24 as core 0 on socket 0 00:05:32.453 EAL: Detected lcore 25 as core 1 on socket 0 00:05:32.453 EAL: Detected lcore 26 as core 2 on socket 0 00:05:32.453 EAL: Detected lcore 27 as core 3 on socket 0 00:05:32.453 EAL: Detected lcore 28 as core 4 on socket 0 00:05:32.453 EAL: Detected lcore 29 as core 5 on socket 0 00:05:32.453 EAL: Detected lcore 30 as core 8 on socket 0 00:05:32.453 EAL: Detected lcore 31 as core 9 on socket 0 00:05:32.453 EAL: Detected lcore 32 as core 10 on socket 0 00:05:32.453 EAL: Detected lcore 33 as core 11 on socket 0 00:05:32.453 EAL: Detected lcore 34 as core 12 on socket 0 00:05:32.453 EAL: Detected lcore 35 as core 13 on socket 0 00:05:32.453 EAL: Detected lcore 36 as core 0 on socket 1 00:05:32.453 EAL: Detected lcore 37 as core 1 on socket 1 00:05:32.453 EAL: Detected lcore 38 as core 2 on socket 1 00:05:32.453 EAL: Detected lcore 39 as core 3 on socket 1 00:05:32.453 EAL: Detected lcore 40 as core 4 on socket 1 00:05:32.453 EAL: Detected lcore 41 as core 5 on socket 1 00:05:32.453 EAL: Detected lcore 42 as core 8 on socket 1 00:05:32.453 EAL: Detected lcore 43 as core 9 on socket 1 00:05:32.453 EAL: Detected lcore 44 as core 10 on socket 1 00:05:32.453 EAL: Detected lcore 45 as core 11 on socket 1 00:05:32.453 EAL: Detected lcore 46 as core 12 on socket 1 00:05:32.453 EAL: Detected lcore 47 as core 13 on socket 1 00:05:32.453 EAL: Maximum logical cores by configuration: 128 00:05:32.453 EAL: Detected CPU lcores: 48 00:05:32.453 EAL: Detected NUMA nodes: 2 00:05:32.453 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:32.453 EAL: Detected shared linkage of DPDK 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:32.453 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:32.453 EAL: Registered [vdev] bus. 00:05:32.453 EAL: bus.vdev log level changed from disabled to notice 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:32.453 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:32.453 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:32.453 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:32.453 EAL: No shared files mode enabled, IPC will be disabled 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Bus pci wants IOVA as 'DC' 00:05:32.453 EAL: Bus vdev wants IOVA as 'DC' 00:05:32.453 EAL: Buses did not request a specific IOVA mode. 00:05:32.453 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:32.453 EAL: Selected IOVA mode 'VA' 00:05:32.453 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.453 EAL: Probing VFIO support... 
00:05:32.453 EAL: IOMMU type 1 (Type 1) is supported 00:05:32.453 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:32.453 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:32.453 EAL: VFIO support initialized 00:05:32.453 EAL: Ask a virtual area of 0x2e000 bytes 00:05:32.453 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:32.453 EAL: Setting up physically contiguous memory... 00:05:32.453 EAL: Setting maximum number of open files to 524288 00:05:32.453 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:32.453 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:32.453 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:32.453 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:32.453 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:32.453 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.453 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:32.453 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.453 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.453 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:32.453 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:32.453 EAL: Hugepages will be freed exactly as allocated. 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: TSC frequency is ~2700000 KHz 00:05:32.453 EAL: Main lcore 0 is ready (tid=7f88e51c4a00;cpuset=[0]) 00:05:32.453 EAL: Trying to obtain current memory policy. 00:05:32.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.453 EAL: Restoring previous memory policy: 0 00:05:32.453 EAL: request: mp_malloc_sync 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Heap on socket 0 was expanded by 2MB 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.453 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.453 00:05:32.453 00:05:32.453 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.453 http://cunit.sourceforge.net/ 00:05:32.453 00:05:32.453 00:05:32.453 Suite: components_suite 00:05:32.453 Test: vtophys_malloc_test ...passed 00:05:32.453 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:32.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.453 EAL: Restoring previous memory policy: 4 00:05:32.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.453 EAL: request: mp_malloc_sync 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.453 EAL: request: mp_malloc_sync 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.453 EAL: Trying to obtain current memory policy. 
00:05:32.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.453 EAL: Restoring previous memory policy: 4 00:05:32.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.453 EAL: request: mp_malloc_sync 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.453 EAL: request: mp_malloc_sync 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.453 EAL: Trying to obtain current memory policy. 00:05:32.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.453 EAL: Restoring previous memory policy: 4 00:05:32.453 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.453 EAL: request: mp_malloc_sync 00:05:32.453 EAL: No shared files mode enabled, IPC is disabled 00:05:32.453 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.454 EAL: Trying to obtain current memory policy. 00:05:32.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.454 EAL: Restoring previous memory policy: 4 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.454 EAL: Trying to obtain current memory policy. 
00:05:32.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.454 EAL: Restoring previous memory policy: 4 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.454 EAL: Trying to obtain current memory policy. 00:05:32.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.454 EAL: Restoring previous memory policy: 4 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.454 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.454 EAL: request: mp_malloc_sync 00:05:32.454 EAL: No shared files mode enabled, IPC is disabled 00:05:32.454 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.454 EAL: Trying to obtain current memory policy. 00:05:32.454 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.712 EAL: Restoring previous memory policy: 4 00:05:32.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.712 EAL: request: mp_malloc_sync 00:05:32.712 EAL: No shared files mode enabled, IPC is disabled 00:05:32.712 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.712 EAL: request: mp_malloc_sync 00:05:32.712 EAL: No shared files mode enabled, IPC is disabled 00:05:32.712 EAL: Heap on socket 0 was shrunk by 130MB 00:05:32.712 EAL: Trying to obtain current memory policy. 
00:05:32.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.712 EAL: Restoring previous memory policy: 4 00:05:32.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.712 EAL: request: mp_malloc_sync 00:05:32.712 EAL: No shared files mode enabled, IPC is disabled 00:05:32.712 EAL: Heap on socket 0 was expanded by 258MB 00:05:32.712 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.712 EAL: request: mp_malloc_sync 00:05:32.712 EAL: No shared files mode enabled, IPC is disabled 00:05:32.712 EAL: Heap on socket 0 was shrunk by 258MB 00:05:32.712 EAL: Trying to obtain current memory policy. 00:05:32.712 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.970 EAL: Restoring previous memory policy: 4 00:05:32.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.970 EAL: request: mp_malloc_sync 00:05:32.970 EAL: No shared files mode enabled, IPC is disabled 00:05:32.970 EAL: Heap on socket 0 was expanded by 514MB 00:05:32.970 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.228 EAL: request: mp_malloc_sync 00:05:33.228 EAL: No shared files mode enabled, IPC is disabled 00:05:33.228 EAL: Heap on socket 0 was shrunk by 514MB 00:05:33.228 EAL: Trying to obtain current memory policy. 
00:05:33.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.488 EAL: Restoring previous memory policy: 4 00:05:33.488 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.488 EAL: request: mp_malloc_sync 00:05:33.488 EAL: No shared files mode enabled, IPC is disabled 00:05:33.488 EAL: Heap on socket 0 was expanded by 1026MB 00:05:33.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.748 EAL: request: mp_malloc_sync 00:05:33.748 EAL: No shared files mode enabled, IPC is disabled 00:05:33.748 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:33.748 passed 00:05:33.748 00:05:33.748 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.748 suites 1 1 n/a 0 0 00:05:33.748 tests 2 2 2 0 0 00:05:33.748 asserts 497 497 497 0 n/a 00:05:33.748 00:05:33.748 Elapsed time = 1.346 seconds 00:05:33.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.748 EAL: request: mp_malloc_sync 00:05:33.748 EAL: No shared files mode enabled, IPC is disabled 00:05:33.748 EAL: Heap on socket 0 was shrunk by 2MB 00:05:33.748 EAL: No shared files mode enabled, IPC is disabled 00:05:33.748 EAL: No shared files mode enabled, IPC is disabled 00:05:33.748 EAL: No shared files mode enabled, IPC is disabled 00:05:33.748 00:05:33.748 real 0m1.455s 00:05:33.748 user 0m0.828s 00:05:33.748 sys 0m0.596s 00:05:33.748 13:41:11 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.748 13:41:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:33.748 ************************************ 00:05:33.748 END TEST env_vtophys 00:05:33.748 ************************************ 00:05:34.008 13:41:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:34.008 13:41:11 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.008 13:41:11 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.008 13:41:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.008 
************************************ 00:05:34.008 START TEST env_pci 00:05:34.008 ************************************ 00:05:34.008 13:41:11 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:34.008 00:05:34.008 00:05:34.008 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.008 http://cunit.sourceforge.net/ 00:05:34.008 00:05:34.008 00:05:34.008 Suite: pci 00:05:34.008 Test: pci_hook ...[2024-07-14 13:41:11.782863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1320922 has claimed it 00:05:34.008 EAL: Cannot find device (10000:00:01.0) 00:05:34.008 EAL: Failed to attach device on primary process 00:05:34.008 passed 00:05:34.008 00:05:34.008 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.008 suites 1 1 n/a 0 0 00:05:34.008 tests 1 1 1 0 0 00:05:34.008 asserts 25 25 25 0 n/a 00:05:34.008 00:05:34.008 Elapsed time = 0.021 seconds 00:05:34.008 00:05:34.008 real 0m0.033s 00:05:34.008 user 0m0.010s 00:05:34.008 sys 0m0.022s 00:05:34.008 13:41:11 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.008 13:41:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:34.008 ************************************ 00:05:34.008 END TEST env_pci 00:05:34.008 ************************************ 00:05:34.008 13:41:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:34.008 13:41:11 env -- env/env.sh@15 -- # uname 00:05:34.008 13:41:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:34.008 13:41:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:34.008 13:41:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.008 13:41:11 env -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:34.008 13:41:11 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.008 13:41:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.008 ************************************ 00:05:34.008 START TEST env_dpdk_post_init 00:05:34.008 ************************************ 00:05:34.008 13:41:11 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.008 EAL: Detected CPU lcores: 48 00:05:34.008 EAL: Detected NUMA nodes: 2 00:05:34.008 EAL: Detected shared linkage of DPDK 00:05:34.008 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.008 EAL: Selected IOVA mode 'VA' 00:05:34.008 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.008 EAL: VFIO support initialized 00:05:34.008 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.008 EAL: Using IOMMU type 1 (Type 1) 00:05:34.008 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:34.008 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:34.008 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:34.268 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:35.205 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:38.551 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:38.551 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:38.551 Starting DPDK initialization... 00:05:38.551 Starting SPDK post initialization... 00:05:38.551 SPDK NVMe probe 00:05:38.551 Attaching to 0000:88:00.0 00:05:38.551 Attached to 0000:88:00.0 00:05:38.551 Cleaning up... 00:05:38.551 00:05:38.551 real 0m4.389s 00:05:38.551 user 0m3.265s 00:05:38.551 sys 0m0.185s 00:05:38.551 13:41:16 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.551 13:41:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.551 ************************************ 00:05:38.551 END TEST env_dpdk_post_init 00:05:38.551 ************************************ 00:05:38.551 13:41:16 env -- env/env.sh@26 -- # uname 00:05:38.551 13:41:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:38.551 13:41:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.551 13:41:16 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.551 13:41:16 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.551 13:41:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.551 ************************************ 00:05:38.551 START TEST env_mem_callbacks 00:05:38.551 
************************************ 00:05:38.551 13:41:16 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:38.551 EAL: Detected CPU lcores: 48 00:05:38.551 EAL: Detected NUMA nodes: 2 00:05:38.551 EAL: Detected shared linkage of DPDK 00:05:38.551 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.551 EAL: Selected IOVA mode 'VA' 00:05:38.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.551 EAL: VFIO support initialized 00:05:38.551 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.551 00:05:38.551 00:05:38.551 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.551 http://cunit.sourceforge.net/ 00:05:38.551 00:05:38.551 00:05:38.551 Suite: memory 00:05:38.551 Test: test ... 00:05:38.551 register 0x200000200000 2097152 00:05:38.551 malloc 3145728 00:05:38.551 register 0x200000400000 4194304 00:05:38.551 buf 0x200000500000 len 3145728 PASSED 00:05:38.551 malloc 64 00:05:38.551 buf 0x2000004fff40 len 64 PASSED 00:05:38.551 malloc 4194304 00:05:38.551 register 0x200000800000 6291456 00:05:38.551 buf 0x200000a00000 len 4194304 PASSED 00:05:38.551 free 0x200000500000 3145728 00:05:38.551 free 0x2000004fff40 64 00:05:38.551 unregister 0x200000400000 4194304 PASSED 00:05:38.551 free 0x200000a00000 4194304 00:05:38.551 unregister 0x200000800000 6291456 PASSED 00:05:38.551 malloc 8388608 00:05:38.551 register 0x200000400000 10485760 00:05:38.551 buf 0x200000600000 len 8388608 PASSED 00:05:38.551 free 0x200000600000 8388608 00:05:38.551 unregister 0x200000400000 10485760 PASSED 00:05:38.551 passed 00:05:38.551 00:05:38.551 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.551 suites 1 1 n/a 0 0 00:05:38.551 tests 1 1 1 0 0 00:05:38.551 asserts 15 15 15 0 n/a 00:05:38.551 00:05:38.551 Elapsed time = 0.005 seconds 00:05:38.551 00:05:38.551 real 0m0.048s 00:05:38.551 user 0m0.009s 00:05:38.551 sys 0m0.039s 
00:05:38.551 13:41:16 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.551 13:41:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:38.552 ************************************ 00:05:38.552 END TEST env_mem_callbacks 00:05:38.552 ************************************ 00:05:38.552 00:05:38.552 real 0m6.353s 00:05:38.552 user 0m4.373s 00:05:38.552 sys 0m1.027s 00:05:38.552 13:41:16 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.552 13:41:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.552 ************************************ 00:05:38.552 END TEST env 00:05:38.552 ************************************ 00:05:38.552 13:41:16 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:38.552 13:41:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.552 13:41:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.552 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:05:38.552 ************************************ 00:05:38.552 START TEST rpc 00:05:38.552 ************************************ 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:38.552 * Looking for test storage... 
00:05:38.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:38.552 13:41:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1321573 00:05:38.552 13:41:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:38.552 13:41:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.552 13:41:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1321573 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@827 -- # '[' -z 1321573 ']' 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.552 13:41:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.552 [2024-07-14 13:41:16.501402] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:38.552 [2024-07-14 13:41:16.501494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321573 ] 00:05:38.552 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.811 [2024-07-14 13:41:16.561386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.811 [2024-07-14 13:41:16.649526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:38.811 [2024-07-14 13:41:16.649576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1321573' to capture a snapshot of events at runtime. 
00:05:38.811 [2024-07-14 13:41:16.649589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:38.811 [2024-07-14 13:41:16.649600] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:38.811 [2024-07-14 13:41:16.649610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1321573 for offline analysis/debug. 00:05:38.811 [2024-07-14 13:41:16.649661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.071 13:41:16 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.071 13:41:16 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:39.071 13:41:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.071 13:41:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:39.071 13:41:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:39.071 13:41:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:39.071 13:41:16 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.071 13:41:16 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.071 13:41:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.071 ************************************ 00:05:39.071 START TEST rpc_integrity 00:05:39.071 ************************************ 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@1121 
-- # rpc_integrity 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.071 13:41:16 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.071 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.071 { 00:05:39.071 "name": "Malloc0", 00:05:39.071 "aliases": [ 00:05:39.071 "ad97273e-7bb8-4d1b-9d22-cb337d96a98b" 00:05:39.071 ], 00:05:39.071 "product_name": "Malloc disk", 00:05:39.071 "block_size": 512, 00:05:39.071 "num_blocks": 16384, 00:05:39.071 "uuid": "ad97273e-7bb8-4d1b-9d22-cb337d96a98b", 00:05:39.071 "assigned_rate_limits": { 00:05:39.071 "rw_ios_per_sec": 0, 00:05:39.071 "rw_mbytes_per_sec": 0, 00:05:39.071 "r_mbytes_per_sec": 0, 00:05:39.071 "w_mbytes_per_sec": 0 00:05:39.071 }, 00:05:39.071 "claimed": false, 
00:05:39.071 "zoned": false, 00:05:39.071 "supported_io_types": { 00:05:39.071 "read": true, 00:05:39.071 "write": true, 00:05:39.071 "unmap": true, 00:05:39.071 "write_zeroes": true, 00:05:39.071 "flush": true, 00:05:39.071 "reset": true, 00:05:39.071 "compare": false, 00:05:39.071 "compare_and_write": false, 00:05:39.071 "abort": true, 00:05:39.071 "nvme_admin": false, 00:05:39.071 "nvme_io": false 00:05:39.071 }, 00:05:39.071 "memory_domains": [ 00:05:39.072 { 00:05:39.072 "dma_device_id": "system", 00:05:39.072 "dma_device_type": 1 00:05:39.072 }, 00:05:39.072 { 00:05:39.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.072 "dma_device_type": 2 00:05:39.072 } 00:05:39.072 ], 00:05:39.072 "driver_specific": {} 00:05:39.072 } 00:05:39.072 ]' 00:05:39.072 13:41:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.072 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.072 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:39.072 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.072 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.072 [2024-07-14 13:41:17.038191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:39.072 [2024-07-14 13:41:17.038241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.072 [2024-07-14 13:41:17.038267] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x206cd60 00:05:39.072 [2024-07-14 13:41:17.038282] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.072 [2024-07-14 13:41:17.039789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.072 [2024-07-14 13:41:17.039819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.072 Passthru0 00:05:39.072 13:41:17 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.072 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.072 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.072 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.331 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.331 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.331 { 00:05:39.331 "name": "Malloc0", 00:05:39.331 "aliases": [ 00:05:39.331 "ad97273e-7bb8-4d1b-9d22-cb337d96a98b" 00:05:39.331 ], 00:05:39.331 "product_name": "Malloc disk", 00:05:39.331 "block_size": 512, 00:05:39.331 "num_blocks": 16384, 00:05:39.331 "uuid": "ad97273e-7bb8-4d1b-9d22-cb337d96a98b", 00:05:39.331 "assigned_rate_limits": { 00:05:39.331 "rw_ios_per_sec": 0, 00:05:39.331 "rw_mbytes_per_sec": 0, 00:05:39.331 "r_mbytes_per_sec": 0, 00:05:39.331 "w_mbytes_per_sec": 0 00:05:39.331 }, 00:05:39.331 "claimed": true, 00:05:39.331 "claim_type": "exclusive_write", 00:05:39.331 "zoned": false, 00:05:39.331 "supported_io_types": { 00:05:39.331 "read": true, 00:05:39.331 "write": true, 00:05:39.331 "unmap": true, 00:05:39.331 "write_zeroes": true, 00:05:39.331 "flush": true, 00:05:39.331 "reset": true, 00:05:39.331 "compare": false, 00:05:39.331 "compare_and_write": false, 00:05:39.331 "abort": true, 00:05:39.331 "nvme_admin": false, 00:05:39.331 "nvme_io": false 00:05:39.331 }, 00:05:39.331 "memory_domains": [ 00:05:39.331 { 00:05:39.331 "dma_device_id": "system", 00:05:39.331 "dma_device_type": 1 00:05:39.331 }, 00:05:39.331 { 00:05:39.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.331 "dma_device_type": 2 00:05:39.331 } 00:05:39.331 ], 00:05:39.331 "driver_specific": {} 00:05:39.331 }, 00:05:39.331 { 00:05:39.331 "name": "Passthru0", 00:05:39.331 "aliases": [ 00:05:39.331 "679c394e-22e4-56c1-9810-65435e448cb7" 00:05:39.331 ], 00:05:39.331 "product_name": "passthru", 00:05:39.331 
"block_size": 512, 00:05:39.331 "num_blocks": 16384, 00:05:39.331 "uuid": "679c394e-22e4-56c1-9810-65435e448cb7", 00:05:39.331 "assigned_rate_limits": { 00:05:39.331 "rw_ios_per_sec": 0, 00:05:39.331 "rw_mbytes_per_sec": 0, 00:05:39.331 "r_mbytes_per_sec": 0, 00:05:39.331 "w_mbytes_per_sec": 0 00:05:39.331 }, 00:05:39.331 "claimed": false, 00:05:39.331 "zoned": false, 00:05:39.331 "supported_io_types": { 00:05:39.331 "read": true, 00:05:39.331 "write": true, 00:05:39.331 "unmap": true, 00:05:39.331 "write_zeroes": true, 00:05:39.331 "flush": true, 00:05:39.331 "reset": true, 00:05:39.331 "compare": false, 00:05:39.331 "compare_and_write": false, 00:05:39.331 "abort": true, 00:05:39.331 "nvme_admin": false, 00:05:39.331 "nvme_io": false 00:05:39.331 }, 00:05:39.331 "memory_domains": [ 00:05:39.331 { 00:05:39.331 "dma_device_id": "system", 00:05:39.331 "dma_device_type": 1 00:05:39.331 }, 00:05:39.331 { 00:05:39.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.331 "dma_device_type": 2 00:05:39.331 } 00:05:39.331 ], 00:05:39.331 "driver_specific": { 00:05:39.331 "passthru": { 00:05:39.331 "name": "Passthru0", 00:05:39.331 "base_bdev_name": "Malloc0" 00:05:39.331 } 00:05:39.331 } 00:05:39.331 } 00:05:39.331 ]' 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.332 13:41:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.332 00:05:39.332 real 0m0.229s 00:05:39.332 user 0m0.152s 00:05:39.332 sys 0m0.017s 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 ************************************ 00:05:39.332 END TEST rpc_integrity 00:05:39.332 ************************************ 00:05:39.332 13:41:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:39.332 13:41:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.332 13:41:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.332 13:41:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 ************************************ 00:05:39.332 START TEST rpc_plugins 00:05:39.332 ************************************ 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:39.332 { 00:05:39.332 "name": "Malloc1", 00:05:39.332 "aliases": [ 00:05:39.332 "200aa26a-0c5a-4b92-9dfc-3e209b0fe447" 00:05:39.332 ], 00:05:39.332 "product_name": "Malloc disk", 00:05:39.332 "block_size": 4096, 00:05:39.332 "num_blocks": 256, 00:05:39.332 "uuid": "200aa26a-0c5a-4b92-9dfc-3e209b0fe447", 00:05:39.332 "assigned_rate_limits": { 00:05:39.332 "rw_ios_per_sec": 0, 00:05:39.332 "rw_mbytes_per_sec": 0, 00:05:39.332 "r_mbytes_per_sec": 0, 00:05:39.332 "w_mbytes_per_sec": 0 00:05:39.332 }, 00:05:39.332 "claimed": false, 00:05:39.332 "zoned": false, 00:05:39.332 "supported_io_types": { 00:05:39.332 "read": true, 00:05:39.332 "write": true, 00:05:39.332 "unmap": true, 00:05:39.332 "write_zeroes": true, 00:05:39.332 "flush": true, 00:05:39.332 "reset": true, 00:05:39.332 "compare": false, 00:05:39.332 "compare_and_write": false, 00:05:39.332 "abort": true, 00:05:39.332 "nvme_admin": false, 00:05:39.332 "nvme_io": false 00:05:39.332 }, 00:05:39.332 "memory_domains": [ 00:05:39.332 { 00:05:39.332 "dma_device_id": "system", 00:05:39.332 "dma_device_type": 1 00:05:39.332 }, 00:05:39.332 { 00:05:39.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.332 "dma_device_type": 2 00:05:39.332 } 00:05:39.332 ], 00:05:39.332 "driver_specific": {} 00:05:39.332 } 00:05:39.332 ]' 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:39.332 13:41:17 rpc.rpc_plugins -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.332 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:39.332 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:39.590 13:41:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:39.591 00:05:39.591 real 0m0.113s 00:05:39.591 user 0m0.071s 00:05:39.591 sys 0m0.013s 00:05:39.591 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.591 13:41:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:39.591 ************************************ 00:05:39.591 END TEST rpc_plugins 00:05:39.591 ************************************ 00:05:39.591 13:41:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:39.591 13:41:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.591 13:41:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.591 13:41:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.591 ************************************ 00:05:39.591 START TEST rpc_trace_cmd_test 00:05:39.591 ************************************ 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:39.591 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1321573", 00:05:39.591 "tpoint_group_mask": "0x8", 00:05:39.591 "iscsi_conn": { 00:05:39.591 "mask": "0x2", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "scsi": { 00:05:39.591 "mask": "0x4", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "bdev": { 00:05:39.591 "mask": "0x8", 00:05:39.591 "tpoint_mask": "0xffffffffffffffff" 00:05:39.591 }, 00:05:39.591 "nvmf_rdma": { 00:05:39.591 "mask": "0x10", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "nvmf_tcp": { 00:05:39.591 "mask": "0x20", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "ftl": { 00:05:39.591 "mask": "0x40", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "blobfs": { 00:05:39.591 "mask": "0x80", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "dsa": { 00:05:39.591 "mask": "0x200", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "thread": { 00:05:39.591 "mask": "0x400", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "nvme_pcie": { 00:05:39.591 "mask": "0x800", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "iaa": { 00:05:39.591 "mask": "0x1000", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "nvme_tcp": { 00:05:39.591 "mask": "0x2000", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "bdev_nvme": { 00:05:39.591 "mask": "0x4000", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 }, 00:05:39.591 "sock": { 00:05:39.591 "mask": "0x8000", 00:05:39.591 "tpoint_mask": "0x0" 00:05:39.591 } 
00:05:39.591 }' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:39.591 00:05:39.591 real 0m0.199s 00:05:39.591 user 0m0.175s 00:05:39.591 sys 0m0.016s 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.591 13:41:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:39.591 ************************************ 00:05:39.591 END TEST rpc_trace_cmd_test 00:05:39.591 ************************************ 00:05:39.850 13:41:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:39.850 13:41:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:39.850 13:41:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:39.850 13:41:17 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.850 13:41:17 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.850 13:41:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 ************************************ 00:05:39.850 START TEST rpc_daemon_integrity 00:05:39.850 ************************************ 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1121 -- # rpc_integrity 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.850 { 00:05:39.850 "name": "Malloc2", 00:05:39.850 "aliases": [ 00:05:39.850 "b8af5e29-a0b7-40a9-929d-e6fc3a87a110" 00:05:39.850 ], 00:05:39.850 "product_name": "Malloc disk", 00:05:39.850 "block_size": 512, 00:05:39.850 "num_blocks": 16384, 00:05:39.850 "uuid": "b8af5e29-a0b7-40a9-929d-e6fc3a87a110", 00:05:39.850 "assigned_rate_limits": { 00:05:39.850 "rw_ios_per_sec": 0, 
00:05:39.850 "rw_mbytes_per_sec": 0, 00:05:39.850 "r_mbytes_per_sec": 0, 00:05:39.850 "w_mbytes_per_sec": 0 00:05:39.850 }, 00:05:39.850 "claimed": false, 00:05:39.850 "zoned": false, 00:05:39.850 "supported_io_types": { 00:05:39.850 "read": true, 00:05:39.850 "write": true, 00:05:39.850 "unmap": true, 00:05:39.850 "write_zeroes": true, 00:05:39.850 "flush": true, 00:05:39.850 "reset": true, 00:05:39.850 "compare": false, 00:05:39.850 "compare_and_write": false, 00:05:39.850 "abort": true, 00:05:39.850 "nvme_admin": false, 00:05:39.850 "nvme_io": false 00:05:39.850 }, 00:05:39.850 "memory_domains": [ 00:05:39.850 { 00:05:39.850 "dma_device_id": "system", 00:05:39.850 "dma_device_type": 1 00:05:39.850 }, 00:05:39.850 { 00:05:39.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.850 "dma_device_type": 2 00:05:39.850 } 00:05:39.850 ], 00:05:39.850 "driver_specific": {} 00:05:39.850 } 00:05:39.850 ]' 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 [2024-07-14 13:41:17.716251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:39.850 [2024-07-14 13:41:17.716299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.850 [2024-07-14 13:41:17.716322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x221e420 00:05:39.850 [2024-07-14 13:41:17.716339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.850 [2024-07-14 13:41:17.717697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.850 
[2024-07-14 13:41:17.717726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.850 Passthru0 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.850 { 00:05:39.850 "name": "Malloc2", 00:05:39.850 "aliases": [ 00:05:39.850 "b8af5e29-a0b7-40a9-929d-e6fc3a87a110" 00:05:39.850 ], 00:05:39.850 "product_name": "Malloc disk", 00:05:39.850 "block_size": 512, 00:05:39.850 "num_blocks": 16384, 00:05:39.850 "uuid": "b8af5e29-a0b7-40a9-929d-e6fc3a87a110", 00:05:39.850 "assigned_rate_limits": { 00:05:39.850 "rw_ios_per_sec": 0, 00:05:39.850 "rw_mbytes_per_sec": 0, 00:05:39.850 "r_mbytes_per_sec": 0, 00:05:39.850 "w_mbytes_per_sec": 0 00:05:39.850 }, 00:05:39.850 "claimed": true, 00:05:39.850 "claim_type": "exclusive_write", 00:05:39.850 "zoned": false, 00:05:39.850 "supported_io_types": { 00:05:39.850 "read": true, 00:05:39.850 "write": true, 00:05:39.850 "unmap": true, 00:05:39.850 "write_zeroes": true, 00:05:39.850 "flush": true, 00:05:39.850 "reset": true, 00:05:39.850 "compare": false, 00:05:39.850 "compare_and_write": false, 00:05:39.850 "abort": true, 00:05:39.850 "nvme_admin": false, 00:05:39.850 "nvme_io": false 00:05:39.850 }, 00:05:39.850 "memory_domains": [ 00:05:39.850 { 00:05:39.850 "dma_device_id": "system", 00:05:39.850 "dma_device_type": 1 00:05:39.850 }, 00:05:39.850 { 00:05:39.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.850 "dma_device_type": 2 00:05:39.850 } 00:05:39.850 ], 00:05:39.850 
"driver_specific": {} 00:05:39.850 }, 00:05:39.850 { 00:05:39.850 "name": "Passthru0", 00:05:39.850 "aliases": [ 00:05:39.850 "cd3969fd-38bb-560e-ac68-5a7a36e1f08e" 00:05:39.850 ], 00:05:39.850 "product_name": "passthru", 00:05:39.850 "block_size": 512, 00:05:39.850 "num_blocks": 16384, 00:05:39.850 "uuid": "cd3969fd-38bb-560e-ac68-5a7a36e1f08e", 00:05:39.850 "assigned_rate_limits": { 00:05:39.850 "rw_ios_per_sec": 0, 00:05:39.850 "rw_mbytes_per_sec": 0, 00:05:39.850 "r_mbytes_per_sec": 0, 00:05:39.850 "w_mbytes_per_sec": 0 00:05:39.850 }, 00:05:39.850 "claimed": false, 00:05:39.850 "zoned": false, 00:05:39.850 "supported_io_types": { 00:05:39.850 "read": true, 00:05:39.850 "write": true, 00:05:39.850 "unmap": true, 00:05:39.850 "write_zeroes": true, 00:05:39.850 "flush": true, 00:05:39.850 "reset": true, 00:05:39.850 "compare": false, 00:05:39.850 "compare_and_write": false, 00:05:39.850 "abort": true, 00:05:39.850 "nvme_admin": false, 00:05:39.850 "nvme_io": false 00:05:39.850 }, 00:05:39.850 "memory_domains": [ 00:05:39.850 { 00:05:39.850 "dma_device_id": "system", 00:05:39.850 "dma_device_type": 1 00:05:39.850 }, 00:05:39.850 { 00:05:39.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.850 "dma_device_type": 2 00:05:39.850 } 00:05:39.850 ], 00:05:39.850 "driver_specific": { 00:05:39.850 "passthru": { 00:05:39.850 "name": "Passthru0", 00:05:39.850 "base_bdev_name": "Malloc2" 00:05:39.850 } 00:05:39.850 } 00:05:39.850 } 00:05:39.850 ]' 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.850 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.851 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.851 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.851 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.851 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.851 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.851 13:41:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.851 00:05:39.851 real 0m0.226s 00:05:39.851 user 0m0.149s 00:05:39.851 sys 0m0.020s 00:05:40.110 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.110 13:41:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.110 ************************************ 00:05:40.110 END TEST rpc_daemon_integrity 00:05:40.110 ************************************ 00:05:40.110 13:41:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:40.110 13:41:17 rpc -- rpc/rpc.sh@84 -- # killprocess 1321573 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@946 -- # '[' -z 1321573 ']' 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@950 -- # kill -0 1321573 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@951 -- # uname 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.110 13:41:17 rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1321573 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1321573' 00:05:40.110 killing process with pid 1321573 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@965 -- # kill 1321573 00:05:40.110 13:41:17 rpc -- common/autotest_common.sh@970 -- # wait 1321573 00:05:40.369 00:05:40.369 real 0m1.878s 00:05:40.369 user 0m2.370s 00:05:40.369 sys 0m0.582s 00:05:40.369 13:41:18 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.369 13:41:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.369 ************************************ 00:05:40.369 END TEST rpc 00:05:40.369 ************************************ 00:05:40.369 13:41:18 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:40.369 13:41:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.369 13:41:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.369 13:41:18 -- common/autotest_common.sh@10 -- # set +x 00:05:40.369 ************************************ 00:05:40.369 START TEST skip_rpc 00:05:40.369 ************************************ 00:05:40.369 13:41:18 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:40.629 * Looking for test storage... 
00:05:40.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:40.629 13:41:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.629 13:41:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:40.629 13:41:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:40.629 13:41:18 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.629 13:41:18 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.629 13:41:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.629 ************************************ 00:05:40.629 START TEST skip_rpc 00:05:40.629 ************************************ 00:05:40.629 13:41:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:40.629 13:41:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1322007 00:05:40.629 13:41:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:40.629 13:41:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.629 13:41:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:40.629 [2024-07-14 13:41:18.460393] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:40.629 [2024-07-14 13:41:18.460471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322007 ] 00:05:40.629 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.629 [2024-07-14 13:41:18.519532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.629 [2024-07-14 13:41:18.605782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.898 
13:41:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1322007 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1322007 ']' 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1322007 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1322007 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1322007' 00:05:45.898 killing process with pid 1322007 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1322007 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1322007 00:05:45.898 00:05:45.898 real 0m5.445s 00:05:45.898 user 0m5.119s 00:05:45.898 sys 0m0.333s 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.898 13:41:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.898 ************************************ 00:05:45.898 END TEST skip_rpc 00:05:45.898 ************************************ 00:05:45.898 13:41:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:45.899 13:41:23 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.899 13:41:23 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.899 13:41:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.158 
************************************ 00:05:46.158 START TEST skip_rpc_with_json 00:05:46.158 ************************************ 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1322700 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1322700 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1322700 ']' 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.158 13:41:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.158 [2024-07-14 13:41:23.956285] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:46.158 [2024-07-14 13:41:23.956389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322700 ] 00:05:46.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.158 [2024-07-14 13:41:24.019632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.158 [2024-07-14 13:41:24.106391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.418 [2024-07-14 13:41:24.364025] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:46.418 request: 00:05:46.418 { 00:05:46.418 "trtype": "tcp", 00:05:46.418 "method": "nvmf_get_transports", 00:05:46.418 "req_id": 1 00:05:46.418 } 00:05:46.418 Got JSON-RPC error response 00:05:46.418 response: 00:05:46.418 { 00:05:46.418 "code": -19, 00:05:46.418 "message": "No such device" 00:05:46.418 } 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.418 [2024-07-14 13:41:24.372136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:05:46.418 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.419 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:46.419 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.419 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.677 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.677 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:46.677 { 00:05:46.677 "subsystems": [ 00:05:46.677 { 00:05:46.677 "subsystem": "vfio_user_target", 00:05:46.677 "config": null 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "subsystem": "keyring", 00:05:46.677 "config": [] 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "subsystem": "iobuf", 00:05:46.677 "config": [ 00:05:46.677 { 00:05:46.677 "method": "iobuf_set_options", 00:05:46.677 "params": { 00:05:46.677 "small_pool_count": 8192, 00:05:46.677 "large_pool_count": 1024, 00:05:46.677 "small_bufsize": 8192, 00:05:46.677 "large_bufsize": 135168 00:05:46.677 } 00:05:46.677 } 00:05:46.677 ] 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "subsystem": "sock", 00:05:46.677 "config": [ 00:05:46.677 { 00:05:46.677 "method": "sock_set_default_impl", 00:05:46.677 "params": { 00:05:46.677 "impl_name": "posix" 00:05:46.677 } 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "method": "sock_impl_set_options", 00:05:46.677 "params": { 00:05:46.677 "impl_name": "ssl", 00:05:46.677 "recv_buf_size": 4096, 00:05:46.677 "send_buf_size": 4096, 00:05:46.677 "enable_recv_pipe": true, 00:05:46.677 "enable_quickack": false, 00:05:46.677 "enable_placement_id": 0, 00:05:46.677 "enable_zerocopy_send_server": true, 00:05:46.677 "enable_zerocopy_send_client": false, 00:05:46.677 "zerocopy_threshold": 0, 00:05:46.677 "tls_version": 0, 
00:05:46.677 "enable_ktls": false 00:05:46.677 } 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "method": "sock_impl_set_options", 00:05:46.677 "params": { 00:05:46.677 "impl_name": "posix", 00:05:46.677 "recv_buf_size": 2097152, 00:05:46.677 "send_buf_size": 2097152, 00:05:46.677 "enable_recv_pipe": true, 00:05:46.677 "enable_quickack": false, 00:05:46.677 "enable_placement_id": 0, 00:05:46.677 "enable_zerocopy_send_server": true, 00:05:46.677 "enable_zerocopy_send_client": false, 00:05:46.677 "zerocopy_threshold": 0, 00:05:46.677 "tls_version": 0, 00:05:46.677 "enable_ktls": false 00:05:46.677 } 00:05:46.677 } 00:05:46.677 ] 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "subsystem": "vmd", 00:05:46.677 "config": [] 00:05:46.677 }, 00:05:46.677 { 00:05:46.677 "subsystem": "accel", 00:05:46.677 "config": [ 00:05:46.677 { 00:05:46.678 "method": "accel_set_options", 00:05:46.678 "params": { 00:05:46.678 "small_cache_size": 128, 00:05:46.678 "large_cache_size": 16, 00:05:46.678 "task_count": 2048, 00:05:46.678 "sequence_count": 2048, 00:05:46.678 "buf_count": 2048 00:05:46.678 } 00:05:46.678 } 00:05:46.678 ] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "bdev", 00:05:46.678 "config": [ 00:05:46.678 { 00:05:46.678 "method": "bdev_set_options", 00:05:46.678 "params": { 00:05:46.678 "bdev_io_pool_size": 65535, 00:05:46.678 "bdev_io_cache_size": 256, 00:05:46.678 "bdev_auto_examine": true, 00:05:46.678 "iobuf_small_cache_size": 128, 00:05:46.678 "iobuf_large_cache_size": 16 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "bdev_raid_set_options", 00:05:46.678 "params": { 00:05:46.678 "process_window_size_kb": 1024 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "bdev_iscsi_set_options", 00:05:46.678 "params": { 00:05:46.678 "timeout_sec": 30 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "bdev_nvme_set_options", 00:05:46.678 "params": { 00:05:46.678 "action_on_timeout": "none", 00:05:46.678 "timeout_us": 
0, 00:05:46.678 "timeout_admin_us": 0, 00:05:46.678 "keep_alive_timeout_ms": 10000, 00:05:46.678 "arbitration_burst": 0, 00:05:46.678 "low_priority_weight": 0, 00:05:46.678 "medium_priority_weight": 0, 00:05:46.678 "high_priority_weight": 0, 00:05:46.678 "nvme_adminq_poll_period_us": 10000, 00:05:46.678 "nvme_ioq_poll_period_us": 0, 00:05:46.678 "io_queue_requests": 0, 00:05:46.678 "delay_cmd_submit": true, 00:05:46.678 "transport_retry_count": 4, 00:05:46.678 "bdev_retry_count": 3, 00:05:46.678 "transport_ack_timeout": 0, 00:05:46.678 "ctrlr_loss_timeout_sec": 0, 00:05:46.678 "reconnect_delay_sec": 0, 00:05:46.678 "fast_io_fail_timeout_sec": 0, 00:05:46.678 "disable_auto_failback": false, 00:05:46.678 "generate_uuids": false, 00:05:46.678 "transport_tos": 0, 00:05:46.678 "nvme_error_stat": false, 00:05:46.678 "rdma_srq_size": 0, 00:05:46.678 "io_path_stat": false, 00:05:46.678 "allow_accel_sequence": false, 00:05:46.678 "rdma_max_cq_size": 0, 00:05:46.678 "rdma_cm_event_timeout_ms": 0, 00:05:46.678 "dhchap_digests": [ 00:05:46.678 "sha256", 00:05:46.678 "sha384", 00:05:46.678 "sha512" 00:05:46.678 ], 00:05:46.678 "dhchap_dhgroups": [ 00:05:46.678 "null", 00:05:46.678 "ffdhe2048", 00:05:46.678 "ffdhe3072", 00:05:46.678 "ffdhe4096", 00:05:46.678 "ffdhe6144", 00:05:46.678 "ffdhe8192" 00:05:46.678 ] 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "bdev_nvme_set_hotplug", 00:05:46.678 "params": { 00:05:46.678 "period_us": 100000, 00:05:46.678 "enable": false 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "bdev_wait_for_examine" 00:05:46.678 } 00:05:46.678 ] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "scsi", 00:05:46.678 "config": null 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "scheduler", 00:05:46.678 "config": [ 00:05:46.678 { 00:05:46.678 "method": "framework_set_scheduler", 00:05:46.678 "params": { 00:05:46.678 "name": "static" 00:05:46.678 } 00:05:46.678 } 00:05:46.678 ] 00:05:46.678 }, 
00:05:46.678 { 00:05:46.678 "subsystem": "vhost_scsi", 00:05:46.678 "config": [] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "vhost_blk", 00:05:46.678 "config": [] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "ublk", 00:05:46.678 "config": [] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "nbd", 00:05:46.678 "config": [] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "nvmf", 00:05:46.678 "config": [ 00:05:46.678 { 00:05:46.678 "method": "nvmf_set_config", 00:05:46.678 "params": { 00:05:46.678 "discovery_filter": "match_any", 00:05:46.678 "admin_cmd_passthru": { 00:05:46.678 "identify_ctrlr": false 00:05:46.678 } 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "nvmf_set_max_subsystems", 00:05:46.678 "params": { 00:05:46.678 "max_subsystems": 1024 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "nvmf_set_crdt", 00:05:46.678 "params": { 00:05:46.678 "crdt1": 0, 00:05:46.678 "crdt2": 0, 00:05:46.678 "crdt3": 0 00:05:46.678 } 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "method": "nvmf_create_transport", 00:05:46.678 "params": { 00:05:46.678 "trtype": "TCP", 00:05:46.678 "max_queue_depth": 128, 00:05:46.678 "max_io_qpairs_per_ctrlr": 127, 00:05:46.678 "in_capsule_data_size": 4096, 00:05:46.678 "max_io_size": 131072, 00:05:46.678 "io_unit_size": 131072, 00:05:46.678 "max_aq_depth": 128, 00:05:46.678 "num_shared_buffers": 511, 00:05:46.678 "buf_cache_size": 4294967295, 00:05:46.678 "dif_insert_or_strip": false, 00:05:46.678 "zcopy": false, 00:05:46.678 "c2h_success": true, 00:05:46.678 "sock_priority": 0, 00:05:46.678 "abort_timeout_sec": 1, 00:05:46.678 "ack_timeout": 0, 00:05:46.678 "data_wr_pool_size": 0 00:05:46.678 } 00:05:46.678 } 00:05:46.678 ] 00:05:46.678 }, 00:05:46.678 { 00:05:46.678 "subsystem": "iscsi", 00:05:46.678 "config": [ 00:05:46.678 { 00:05:46.678 "method": "iscsi_set_options", 00:05:46.678 "params": { 00:05:46.678 "node_base": "iqn.2016-06.io.spdk", 00:05:46.678 
"max_sessions": 128, 00:05:46.678 "max_connections_per_session": 2, 00:05:46.678 "max_queue_depth": 64, 00:05:46.678 "default_time2wait": 2, 00:05:46.678 "default_time2retain": 20, 00:05:46.678 "first_burst_length": 8192, 00:05:46.678 "immediate_data": true, 00:05:46.678 "allow_duplicated_isid": false, 00:05:46.678 "error_recovery_level": 0, 00:05:46.678 "nop_timeout": 60, 00:05:46.678 "nop_in_interval": 30, 00:05:46.678 "disable_chap": false, 00:05:46.678 "require_chap": false, 00:05:46.678 "mutual_chap": false, 00:05:46.678 "chap_group": 0, 00:05:46.678 "max_large_datain_per_connection": 64, 00:05:46.678 "max_r2t_per_connection": 4, 00:05:46.678 "pdu_pool_size": 36864, 00:05:46.678 "immediate_data_pool_size": 16384, 00:05:46.678 "data_out_pool_size": 2048 00:05:46.678 } 00:05:46.678 } 00:05:46.678 ] 00:05:46.678 } 00:05:46.678 ] 00:05:46.678 } 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1322700 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1322700 ']' 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1322700 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1322700 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1322700' 00:05:46.678 killing process with pid 1322700 
00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1322700 00:05:46.678 13:41:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1322700 00:05:47.247 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1322840 00:05:47.247 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.247 13:41:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1322840 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1322840 ']' 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1322840 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1322840 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1322840' 00:05:52.514 killing process with pid 1322840 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1322840 00:05:52.514 13:41:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1322840 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:52.514 00:05:52.514 real 0m6.477s 00:05:52.514 user 0m6.065s 00:05:52.514 sys 0m0.690s 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.514 ************************************ 00:05:52.514 END TEST skip_rpc_with_json 00:05:52.514 ************************************ 00:05:52.514 13:41:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:52.514 13:41:30 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.514 13:41:30 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.514 13:41:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.514 ************************************ 00:05:52.514 START TEST skip_rpc_with_delay 00:05:52.514 ************************************ 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.514 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.514 [2024-07-14 13:41:30.483559] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:52.514 [2024-07-14 13:41:30.483673] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:52.772 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:52.772 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.772 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.772 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.772 00:05:52.772 real 0m0.069s 00:05:52.772 user 0m0.042s 00:05:52.772 sys 0m0.026s 00:05:52.772 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.772 13:41:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:52.772 ************************************ 00:05:52.772 END TEST skip_rpc_with_delay 00:05:52.772 ************************************ 00:05:52.772 13:41:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:52.772 13:41:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:52.772 13:41:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:52.772 13:41:30 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.772 13:41:30 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.772 13:41:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.772 ************************************ 00:05:52.772 START TEST exit_on_failed_rpc_init 00:05:52.772 ************************************ 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1323552 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1323552 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1323552 ']' 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.772 13:41:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.772 [2024-07-14 13:41:30.596553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:52.772 [2024-07-14 13:41:30.596654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323552 ] 00:05:52.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.772 [2024-07-14 13:41:30.659505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.772 [2024-07-14 13:41:30.746244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.029 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:53.030 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.287 [2024-07-14 13:41:31.058344] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:05:53.287 [2024-07-14 13:41:31.058430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323564 ] 00:05:53.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.287 [2024-07-14 13:41:31.121207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.287 [2024-07-14 13:41:31.213497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.287 [2024-07-14 13:41:31.213626] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:53.287 [2024-07-14 13:41:31.213649] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:53.287 [2024-07-14 13:41:31.213662] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1323552 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1323552 ']' 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1323552 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1323552 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1323552' 
00:05:53.547 killing process with pid 1323552 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1323552 00:05:53.547 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1323552 00:05:53.805 00:05:53.805 real 0m1.188s 00:05:53.805 user 0m1.274s 00:05:53.805 sys 0m0.462s 00:05:53.805 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.805 13:41:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.805 ************************************ 00:05:53.805 END TEST exit_on_failed_rpc_init 00:05:53.805 ************************************ 00:05:53.805 13:41:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.805 00:05:53.805 real 0m13.433s 00:05:53.805 user 0m12.611s 00:05:53.805 sys 0m1.670s 00:05:53.805 13:41:31 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.805 13:41:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.805 ************************************ 00:05:53.805 END TEST skip_rpc 00:05:53.805 ************************************ 00:05:53.805 13:41:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.805 13:41:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.805 13:41:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.805 13:41:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 ************************************ 00:05:54.066 START TEST rpc_client 00:05:54.066 ************************************ 00:05:54.066 13:41:31 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:54.066 * Looking for test storage... 
00:05:54.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:54.066 13:41:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:54.066 OK 00:05:54.066 13:41:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:54.066 00:05:54.066 real 0m0.066s 00:05:54.066 user 0m0.029s 00:05:54.066 sys 0m0.042s 00:05:54.066 13:41:31 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.066 13:41:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 ************************************ 00:05:54.066 END TEST rpc_client 00:05:54.066 ************************************ 00:05:54.066 13:41:31 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:54.066 13:41:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.066 13:41:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.066 13:41:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 ************************************ 00:05:54.066 START TEST json_config 00:05:54.066 ************************************ 00:05:54.066 13:41:31 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.066 13:41:31 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.066 13:41:31 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.066 13:41:31 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.066 13:41:31 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.066 13:41:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:54.066 13:41:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.066 13:41:31 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.066 13:41:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:54.066 13:41:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@47 -- # : 0 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.066 13:41:31 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.066 13:41:31 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:54.066 13:41:31 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:54.066 INFO: JSON configuration test init 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:54.066 13:41:31 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:54.066 13:41:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:54.066 13:41:31 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:54.066 13:41:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.066 13:41:31 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:54.066 13:41:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.066 13:41:31 json_config -- json_config/common.sh@10 -- # shift 00:05:54.066 13:41:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.066 13:41:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.066 13:41:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.066 13:41:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.067 13:41:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.067 13:41:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1323806 00:05:54.067 13:41:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:54.067 13:41:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:54.067 Waiting for target to run... 00:05:54.067 13:41:31 json_config -- json_config/common.sh@25 -- # waitforlisten 1323806 /var/tmp/spdk_tgt.sock 00:05:54.067 13:41:31 json_config -- common/autotest_common.sh@827 -- # '[' -z 1323806 ']' 00:05:54.067 13:41:31 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.067 13:41:31 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.067 13:41:31 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.067 13:41:31 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.067 13:41:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.067 [2024-07-14 13:41:32.029033] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:05:54.067 [2024-07-14 13:41:32.029131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323806 ] 00:05:54.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.586 [2024-07-14 13:41:32.362469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.586 [2024-07-14 13:41:32.425984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.154 13:41:32 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.154 13:41:32 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:55.154 13:41:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:55.154 00:05:55.154 13:41:32 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:55.154 13:41:32 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:55.154 13:41:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.154 13:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.154 13:41:32 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:55.154 13:41:32 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:55.154 13:41:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.154 13:41:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.154 13:41:32 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:55.154 13:41:32 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:55.154 13:41:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:58.457 13:41:36 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:58.457 13:41:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:58.457 13:41:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:58.457 13:41:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.457 13:41:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:58.457 13:41:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:58.457 13:41:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:58.457 13:41:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:58.457 13:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:58.458 13:41:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.458 13:41:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:58.458 13:41:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:58.458 13:41:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:58.458 13:41:36 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:58.458 13:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:58.716 MallocForNvmf0 00:05:58.716 13:41:36 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:58.716 13:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:58.974 MallocForNvmf1 00:05:58.974 13:41:36 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:58.974 13:41:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:59.232 [2024-07-14 13:41:37.118587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.232 13:41:37 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.232 13:41:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.490 13:41:37 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.490 13:41:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.748 13:41:37 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.748 13:41:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:00.007 13:41:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:00.007 13:41:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:00.266 [2024-07-14 13:41:38.109872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:00.266 13:41:38 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:00.266 13:41:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.266 13:41:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.266 13:41:38 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:00.266 13:41:38 
json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.266 13:41:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.266 13:41:38 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:00.266 13:41:38 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.266 13:41:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.561 MallocBdevForConfigChangeCheck 00:06:00.561 13:41:38 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:00.561 13:41:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.561 13:41:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.561 13:41:38 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:00.561 13:41:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.130 13:41:38 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:01.130 INFO: shutting down applications... 
00:06:01.130 13:41:38 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:01.130 13:41:38 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:01.130 13:41:38 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:01.130 13:41:38 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:02.509 Calling clear_iscsi_subsystem 00:06:02.509 Calling clear_nvmf_subsystem 00:06:02.509 Calling clear_nbd_subsystem 00:06:02.509 Calling clear_ublk_subsystem 00:06:02.509 Calling clear_vhost_blk_subsystem 00:06:02.509 Calling clear_vhost_scsi_subsystem 00:06:02.509 Calling clear_bdev_subsystem 00:06:02.509 13:41:40 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:02.509 13:41:40 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:02.509 13:41:40 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:02.509 13:41:40 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.509 13:41:40 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:02.509 13:41:40 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:03.076 13:41:40 json_config -- json_config/json_config.sh@345 -- # break 00:06:03.076 13:41:40 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:03.076 13:41:40 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:03.076 13:41:40 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:03.076 13:41:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.076 13:41:40 json_config -- json_config/common.sh@35 -- # [[ -n 1323806 ]] 00:06:03.076 13:41:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1323806 00:06:03.076 13:41:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.076 13:41:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.076 13:41:40 json_config -- json_config/common.sh@41 -- # kill -0 1323806 00:06:03.076 13:41:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.641 13:41:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.641 13:41:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.641 13:41:41 json_config -- json_config/common.sh@41 -- # kill -0 1323806 00:06:03.641 13:41:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.641 13:41:41 json_config -- json_config/common.sh@43 -- # break 00:06:03.641 13:41:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.641 13:41:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.641 SPDK target shutdown done 00:06:03.641 13:41:41 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:03.641 INFO: relaunching applications... 
00:06:03.641 13:41:41 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.641 13:41:41 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.641 13:41:41 json_config -- json_config/common.sh@10 -- # shift 00:06:03.641 13:41:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.641 13:41:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.641 13:41:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.641 13:41:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.641 13:41:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.641 13:41:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1324999 00:06:03.642 13:41:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.642 13:41:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.642 Waiting for target to run... 00:06:03.642 13:41:41 json_config -- json_config/common.sh@25 -- # waitforlisten 1324999 /var/tmp/spdk_tgt.sock 00:06:03.642 13:41:41 json_config -- common/autotest_common.sh@827 -- # '[' -z 1324999 ']' 00:06:03.642 13:41:41 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.642 13:41:41 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.642 13:41:41 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:03.642 13:41:41 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.642 13:41:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.642 [2024-07-14 13:41:41.372310] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:03.642 [2024-07-14 13:41:41.372393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324999 ] 00:06:03.642 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.900 [2024-07-14 13:41:41.874130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.157 [2024-07-14 13:41:41.955707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.442 [2024-07-14 13:41:44.987869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.442 [2024-07-14 13:41:45.020340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:07.442 13:41:45 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.442 13:41:45 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:07.442 13:41:45 json_config -- json_config/common.sh@26 -- # echo '' 00:06:07.442 00:06:07.442 13:41:45 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:07.442 13:41:45 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:07.442 INFO: Checking if target configuration is the same... 
00:06:07.443 13:41:45 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.443 13:41:45 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:07.443 13:41:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.443 + '[' 2 -ne 2 ']' 00:06:07.443 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.443 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:07.443 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.443 +++ basename /dev/fd/62 00:06:07.443 ++ mktemp /tmp/62.XXX 00:06:07.443 + tmp_file_1=/tmp/62.a6q 00:06:07.443 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.443 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.443 + tmp_file_2=/tmp/spdk_tgt_config.json.qsU 00:06:07.443 + ret=0 00:06:07.443 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.701 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.701 + diff -u /tmp/62.a6q /tmp/spdk_tgt_config.json.qsU 00:06:07.701 + echo 'INFO: JSON config files are the same' 00:06:07.701 INFO: JSON config files are the same 00:06:07.701 + rm /tmp/62.a6q /tmp/spdk_tgt_config.json.qsU 00:06:07.701 + exit 0 00:06:07.701 13:41:45 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:07.701 13:41:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:07.701 INFO: changing configuration and checking if this can be detected... 
00:06:07.701 13:41:45 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.701 13:41:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.959 13:41:45 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.959 13:41:45 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:07.959 13:41:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.959 + '[' 2 -ne 2 ']' 00:06:07.959 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.959 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:07.959 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:07.959 +++ basename /dev/fd/62 00:06:07.959 ++ mktemp /tmp/62.XXX 00:06:07.959 + tmp_file_1=/tmp/62.ky8 00:06:07.959 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.959 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.959 + tmp_file_2=/tmp/spdk_tgt_config.json.ZTH 00:06:07.959 + ret=0 00:06:07.959 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.217 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:08.217 + diff -u /tmp/62.ky8 /tmp/spdk_tgt_config.json.ZTH 00:06:08.217 + ret=1 00:06:08.217 + echo '=== Start of file: /tmp/62.ky8 ===' 00:06:08.217 + cat /tmp/62.ky8 00:06:08.217 + echo '=== End of file: /tmp/62.ky8 ===' 00:06:08.217 + echo '' 00:06:08.217 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZTH ===' 00:06:08.217 + cat /tmp/spdk_tgt_config.json.ZTH 00:06:08.217 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZTH ===' 00:06:08.217 + echo '' 00:06:08.217 + rm /tmp/62.ky8 /tmp/spdk_tgt_config.json.ZTH 00:06:08.217 + exit 1 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:08.217 INFO: configuration change detected. 
00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:08.217 13:41:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.217 13:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@317 -- # [[ -n 1324999 ]] 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:08.217 13:41:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:08.217 13:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:08.217 13:41:46 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:08.217 13:41:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.217 13:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.476 13:41:46 json_config -- json_config/json_config.sh@323 -- # killprocess 1324999 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@946 -- # '[' -z 1324999 ']' 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@950 -- # kill -0 
1324999 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@951 -- # uname 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1324999 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1324999' 00:06:08.476 killing process with pid 1324999 00:06:08.476 13:41:46 json_config -- common/autotest_common.sh@965 -- # kill 1324999 00:06:08.477 13:41:46 json_config -- common/autotest_common.sh@970 -- # wait 1324999 00:06:09.856 13:41:47 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.856 13:41:47 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:09.856 13:41:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:09.856 13:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.115 13:41:47 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:10.115 13:41:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:10.115 INFO: Success 00:06:10.115 00:06:10.115 real 0m15.931s 00:06:10.115 user 0m17.718s 00:06:10.115 sys 0m1.993s 00:06:10.115 13:41:47 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.115 13:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.115 ************************************ 00:06:10.115 END TEST json_config 00:06:10.115 ************************************ 00:06:10.115 13:41:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.115 13:41:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.115 13:41:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.115 13:41:47 -- common/autotest_common.sh@10 -- # set +x 00:06:10.115 ************************************ 00:06:10.115 START TEST json_config_extra_key 00:06:10.115 ************************************ 00:06:10.115 13:41:47 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.115 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:10.115 13:41:47 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:10.115 13:41:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.116 13:41:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.116 13:41:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.116 13:41:47 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.116 13:41:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.116 13:41:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.116 13:41:47 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.116 13:41:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.116 13:41:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.116 13:41:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.116 13:41:47 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.116 INFO: launching applications... 
00:06:10.116 13:41:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1325908 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.116 Waiting for target to run... 
00:06:10.116 13:41:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1325908 /var/tmp/spdk_tgt.sock 00:06:10.116 13:41:47 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1325908 ']' 00:06:10.116 13:41:47 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.116 13:41:47 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.116 13:41:47 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.116 13:41:47 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.116 13:41:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 [2024-07-14 13:41:48.000834] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:10.116 [2024-07-14 13:41:48.000982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325908 ] 00:06:10.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.373 [2024-07-14 13:41:48.344307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.633 [2024-07-14 13:41:48.407817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.203 13:41:48 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.203 13:41:48 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.203 00:06:11.203 13:41:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.203 INFO: shutting down applications... 
00:06:11.203 13:41:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1325908 ]] 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1325908 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1325908 00:06:11.203 13:41:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1325908 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.770 13:41:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.770 SPDK target shutdown done 00:06:11.770 13:41:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.770 Success 00:06:11.770 00:06:11.770 real 0m1.557s 00:06:11.770 user 0m1.503s 00:06:11.770 sys 0m0.444s 00:06:11.770 13:41:49 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.770 13:41:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.770 
************************************ 00:06:11.770 END TEST json_config_extra_key 00:06:11.770 ************************************ 00:06:11.770 13:41:49 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.770 13:41:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.770 13:41:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.770 13:41:49 -- common/autotest_common.sh@10 -- # set +x 00:06:11.770 ************************************ 00:06:11.770 START TEST alias_rpc 00:06:11.770 ************************************ 00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.770 * Looking for test storage... 00:06:11.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:11.770 13:41:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.770 13:41:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1326215 00:06:11.770 13:41:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.770 13:41:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1326215 00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1326215 ']' 00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.770 13:41:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.770 [2024-07-14 13:41:49.599735] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:11.770 [2024-07-14 13:41:49.599831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326215 ] 00:06:11.770 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.770 [2024-07-14 13:41:49.658261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.770 [2024-07-14 13:41:49.741692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.028 13:41:49 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.028 13:41:49 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:12.028 13:41:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:12.286 13:41:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1326215 00:06:12.286 13:41:50 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1326215 ']' 00:06:12.286 13:41:50 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1326215 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1326215 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1326215' 00:06:12.544 killing process 
with pid 1326215 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@965 -- # kill 1326215 00:06:12.544 13:41:50 alias_rpc -- common/autotest_common.sh@970 -- # wait 1326215 00:06:12.803 00:06:12.803 real 0m1.193s 00:06:12.803 user 0m1.254s 00:06:12.803 sys 0m0.430s 00:06:12.803 13:41:50 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.803 13:41:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.803 ************************************ 00:06:12.803 END TEST alias_rpc 00:06:12.803 ************************************ 00:06:12.803 13:41:50 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:12.803 13:41:50 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:12.803 13:41:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.803 13:41:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.803 13:41:50 -- common/autotest_common.sh@10 -- # set +x 00:06:12.803 ************************************ 00:06:12.803 START TEST spdkcli_tcp 00:06:12.803 ************************************ 00:06:12.803 13:41:50 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.062 * Looking for test storage... 
00:06:13.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1326403 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.062 13:41:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1326403 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1326403 ']' 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.062 13:41:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.062 [2024-07-14 13:41:50.854515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:13.062 [2024-07-14 13:41:50.854607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326403 ] 00:06:13.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.062 [2024-07-14 13:41:50.912580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.062 [2024-07-14 13:41:50.997047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.062 [2024-07-14 13:41:50.997050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.321 13:41:51 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.321 13:41:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:13.321 13:41:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1326417 00:06:13.321 13:41:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:13.321 13:41:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:13.579 [ 00:06:13.579 "bdev_malloc_delete", 00:06:13.579 "bdev_malloc_create", 00:06:13.579 "bdev_null_resize", 00:06:13.579 "bdev_null_delete", 00:06:13.579 "bdev_null_create", 00:06:13.579 "bdev_nvme_cuse_unregister", 00:06:13.579 "bdev_nvme_cuse_register", 00:06:13.579 "bdev_opal_new_user", 00:06:13.579 "bdev_opal_set_lock_state", 00:06:13.579 "bdev_opal_delete", 00:06:13.579 "bdev_opal_get_info", 00:06:13.579 "bdev_opal_create", 00:06:13.579 "bdev_nvme_opal_revert", 00:06:13.579 "bdev_nvme_opal_init", 
00:06:13.579 "bdev_nvme_send_cmd", 00:06:13.579 "bdev_nvme_get_path_iostat", 00:06:13.579 "bdev_nvme_get_mdns_discovery_info", 00:06:13.579 "bdev_nvme_stop_mdns_discovery", 00:06:13.579 "bdev_nvme_start_mdns_discovery", 00:06:13.579 "bdev_nvme_set_multipath_policy", 00:06:13.579 "bdev_nvme_set_preferred_path", 00:06:13.579 "bdev_nvme_get_io_paths", 00:06:13.579 "bdev_nvme_remove_error_injection", 00:06:13.579 "bdev_nvme_add_error_injection", 00:06:13.579 "bdev_nvme_get_discovery_info", 00:06:13.579 "bdev_nvme_stop_discovery", 00:06:13.579 "bdev_nvme_start_discovery", 00:06:13.579 "bdev_nvme_get_controller_health_info", 00:06:13.579 "bdev_nvme_disable_controller", 00:06:13.579 "bdev_nvme_enable_controller", 00:06:13.579 "bdev_nvme_reset_controller", 00:06:13.579 "bdev_nvme_get_transport_statistics", 00:06:13.579 "bdev_nvme_apply_firmware", 00:06:13.579 "bdev_nvme_detach_controller", 00:06:13.579 "bdev_nvme_get_controllers", 00:06:13.579 "bdev_nvme_attach_controller", 00:06:13.579 "bdev_nvme_set_hotplug", 00:06:13.579 "bdev_nvme_set_options", 00:06:13.579 "bdev_passthru_delete", 00:06:13.579 "bdev_passthru_create", 00:06:13.579 "bdev_lvol_set_parent_bdev", 00:06:13.579 "bdev_lvol_set_parent", 00:06:13.579 "bdev_lvol_check_shallow_copy", 00:06:13.579 "bdev_lvol_start_shallow_copy", 00:06:13.579 "bdev_lvol_grow_lvstore", 00:06:13.579 "bdev_lvol_get_lvols", 00:06:13.579 "bdev_lvol_get_lvstores", 00:06:13.579 "bdev_lvol_delete", 00:06:13.579 "bdev_lvol_set_read_only", 00:06:13.579 "bdev_lvol_resize", 00:06:13.579 "bdev_lvol_decouple_parent", 00:06:13.579 "bdev_lvol_inflate", 00:06:13.579 "bdev_lvol_rename", 00:06:13.579 "bdev_lvol_clone_bdev", 00:06:13.579 "bdev_lvol_clone", 00:06:13.579 "bdev_lvol_snapshot", 00:06:13.579 "bdev_lvol_create", 00:06:13.579 "bdev_lvol_delete_lvstore", 00:06:13.579 "bdev_lvol_rename_lvstore", 00:06:13.579 "bdev_lvol_create_lvstore", 00:06:13.579 "bdev_raid_set_options", 00:06:13.579 "bdev_raid_remove_base_bdev", 00:06:13.579 
"bdev_raid_add_base_bdev", 00:06:13.579 "bdev_raid_delete", 00:06:13.579 "bdev_raid_create", 00:06:13.579 "bdev_raid_get_bdevs", 00:06:13.579 "bdev_error_inject_error", 00:06:13.579 "bdev_error_delete", 00:06:13.579 "bdev_error_create", 00:06:13.579 "bdev_split_delete", 00:06:13.579 "bdev_split_create", 00:06:13.579 "bdev_delay_delete", 00:06:13.579 "bdev_delay_create", 00:06:13.579 "bdev_delay_update_latency", 00:06:13.579 "bdev_zone_block_delete", 00:06:13.579 "bdev_zone_block_create", 00:06:13.579 "blobfs_create", 00:06:13.579 "blobfs_detect", 00:06:13.579 "blobfs_set_cache_size", 00:06:13.579 "bdev_aio_delete", 00:06:13.579 "bdev_aio_rescan", 00:06:13.579 "bdev_aio_create", 00:06:13.579 "bdev_ftl_set_property", 00:06:13.579 "bdev_ftl_get_properties", 00:06:13.579 "bdev_ftl_get_stats", 00:06:13.579 "bdev_ftl_unmap", 00:06:13.579 "bdev_ftl_unload", 00:06:13.579 "bdev_ftl_delete", 00:06:13.579 "bdev_ftl_load", 00:06:13.579 "bdev_ftl_create", 00:06:13.579 "bdev_virtio_attach_controller", 00:06:13.579 "bdev_virtio_scsi_get_devices", 00:06:13.579 "bdev_virtio_detach_controller", 00:06:13.579 "bdev_virtio_blk_set_hotplug", 00:06:13.579 "bdev_iscsi_delete", 00:06:13.579 "bdev_iscsi_create", 00:06:13.579 "bdev_iscsi_set_options", 00:06:13.579 "accel_error_inject_error", 00:06:13.579 "ioat_scan_accel_module", 00:06:13.579 "dsa_scan_accel_module", 00:06:13.579 "iaa_scan_accel_module", 00:06:13.579 "vfu_virtio_create_scsi_endpoint", 00:06:13.579 "vfu_virtio_scsi_remove_target", 00:06:13.579 "vfu_virtio_scsi_add_target", 00:06:13.579 "vfu_virtio_create_blk_endpoint", 00:06:13.579 "vfu_virtio_delete_endpoint", 00:06:13.579 "keyring_file_remove_key", 00:06:13.579 "keyring_file_add_key", 00:06:13.579 "keyring_linux_set_options", 00:06:13.579 "iscsi_get_histogram", 00:06:13.579 "iscsi_enable_histogram", 00:06:13.579 "iscsi_set_options", 00:06:13.579 "iscsi_get_auth_groups", 00:06:13.579 "iscsi_auth_group_remove_secret", 00:06:13.579 "iscsi_auth_group_add_secret", 00:06:13.579 
"iscsi_delete_auth_group", 00:06:13.579 "iscsi_create_auth_group", 00:06:13.579 "iscsi_set_discovery_auth", 00:06:13.579 "iscsi_get_options", 00:06:13.579 "iscsi_target_node_request_logout", 00:06:13.579 "iscsi_target_node_set_redirect", 00:06:13.579 "iscsi_target_node_set_auth", 00:06:13.579 "iscsi_target_node_add_lun", 00:06:13.579 "iscsi_get_stats", 00:06:13.579 "iscsi_get_connections", 00:06:13.579 "iscsi_portal_group_set_auth", 00:06:13.579 "iscsi_start_portal_group", 00:06:13.579 "iscsi_delete_portal_group", 00:06:13.579 "iscsi_create_portal_group", 00:06:13.579 "iscsi_get_portal_groups", 00:06:13.579 "iscsi_delete_target_node", 00:06:13.579 "iscsi_target_node_remove_pg_ig_maps", 00:06:13.579 "iscsi_target_node_add_pg_ig_maps", 00:06:13.579 "iscsi_create_target_node", 00:06:13.579 "iscsi_get_target_nodes", 00:06:13.579 "iscsi_delete_initiator_group", 00:06:13.579 "iscsi_initiator_group_remove_initiators", 00:06:13.579 "iscsi_initiator_group_add_initiators", 00:06:13.579 "iscsi_create_initiator_group", 00:06:13.579 "iscsi_get_initiator_groups", 00:06:13.579 "nvmf_set_crdt", 00:06:13.579 "nvmf_set_config", 00:06:13.579 "nvmf_set_max_subsystems", 00:06:13.579 "nvmf_stop_mdns_prr", 00:06:13.579 "nvmf_publish_mdns_prr", 00:06:13.579 "nvmf_subsystem_get_listeners", 00:06:13.579 "nvmf_subsystem_get_qpairs", 00:06:13.579 "nvmf_subsystem_get_controllers", 00:06:13.579 "nvmf_get_stats", 00:06:13.579 "nvmf_get_transports", 00:06:13.579 "nvmf_create_transport", 00:06:13.579 "nvmf_get_targets", 00:06:13.579 "nvmf_delete_target", 00:06:13.579 "nvmf_create_target", 00:06:13.579 "nvmf_subsystem_allow_any_host", 00:06:13.579 "nvmf_subsystem_remove_host", 00:06:13.579 "nvmf_subsystem_add_host", 00:06:13.579 "nvmf_ns_remove_host", 00:06:13.579 "nvmf_ns_add_host", 00:06:13.579 "nvmf_subsystem_remove_ns", 00:06:13.579 "nvmf_subsystem_add_ns", 00:06:13.579 "nvmf_subsystem_listener_set_ana_state", 00:06:13.579 "nvmf_discovery_get_referrals", 00:06:13.579 
"nvmf_discovery_remove_referral", 00:06:13.579 "nvmf_discovery_add_referral", 00:06:13.579 "nvmf_subsystem_remove_listener", 00:06:13.579 "nvmf_subsystem_add_listener", 00:06:13.579 "nvmf_delete_subsystem", 00:06:13.579 "nvmf_create_subsystem", 00:06:13.579 "nvmf_get_subsystems", 00:06:13.579 "env_dpdk_get_mem_stats", 00:06:13.579 "nbd_get_disks", 00:06:13.579 "nbd_stop_disk", 00:06:13.579 "nbd_start_disk", 00:06:13.579 "ublk_recover_disk", 00:06:13.579 "ublk_get_disks", 00:06:13.579 "ublk_stop_disk", 00:06:13.579 "ublk_start_disk", 00:06:13.579 "ublk_destroy_target", 00:06:13.579 "ublk_create_target", 00:06:13.579 "virtio_blk_create_transport", 00:06:13.579 "virtio_blk_get_transports", 00:06:13.579 "vhost_controller_set_coalescing", 00:06:13.579 "vhost_get_controllers", 00:06:13.579 "vhost_delete_controller", 00:06:13.579 "vhost_create_blk_controller", 00:06:13.579 "vhost_scsi_controller_remove_target", 00:06:13.580 "vhost_scsi_controller_add_target", 00:06:13.580 "vhost_start_scsi_controller", 00:06:13.580 "vhost_create_scsi_controller", 00:06:13.580 "thread_set_cpumask", 00:06:13.580 "framework_get_scheduler", 00:06:13.580 "framework_set_scheduler", 00:06:13.580 "framework_get_reactors", 00:06:13.580 "thread_get_io_channels", 00:06:13.580 "thread_get_pollers", 00:06:13.580 "thread_get_stats", 00:06:13.580 "framework_monitor_context_switch", 00:06:13.580 "spdk_kill_instance", 00:06:13.580 "log_enable_timestamps", 00:06:13.580 "log_get_flags", 00:06:13.580 "log_clear_flag", 00:06:13.580 "log_set_flag", 00:06:13.580 "log_get_level", 00:06:13.580 "log_set_level", 00:06:13.580 "log_get_print_level", 00:06:13.580 "log_set_print_level", 00:06:13.580 "framework_enable_cpumask_locks", 00:06:13.580 "framework_disable_cpumask_locks", 00:06:13.580 "framework_wait_init", 00:06:13.580 "framework_start_init", 00:06:13.580 "scsi_get_devices", 00:06:13.580 "bdev_get_histogram", 00:06:13.580 "bdev_enable_histogram", 00:06:13.580 "bdev_set_qos_limit", 00:06:13.580 
"bdev_set_qd_sampling_period", 00:06:13.580 "bdev_get_bdevs", 00:06:13.580 "bdev_reset_iostat", 00:06:13.580 "bdev_get_iostat", 00:06:13.580 "bdev_examine", 00:06:13.580 "bdev_wait_for_examine", 00:06:13.580 "bdev_set_options", 00:06:13.580 "notify_get_notifications", 00:06:13.580 "notify_get_types", 00:06:13.580 "accel_get_stats", 00:06:13.580 "accel_set_options", 00:06:13.580 "accel_set_driver", 00:06:13.580 "accel_crypto_key_destroy", 00:06:13.580 "accel_crypto_keys_get", 00:06:13.580 "accel_crypto_key_create", 00:06:13.580 "accel_assign_opc", 00:06:13.580 "accel_get_module_info", 00:06:13.580 "accel_get_opc_assignments", 00:06:13.580 "vmd_rescan", 00:06:13.580 "vmd_remove_device", 00:06:13.580 "vmd_enable", 00:06:13.580 "sock_get_default_impl", 00:06:13.580 "sock_set_default_impl", 00:06:13.580 "sock_impl_set_options", 00:06:13.580 "sock_impl_get_options", 00:06:13.580 "iobuf_get_stats", 00:06:13.580 "iobuf_set_options", 00:06:13.580 "keyring_get_keys", 00:06:13.580 "framework_get_pci_devices", 00:06:13.580 "framework_get_config", 00:06:13.580 "framework_get_subsystems", 00:06:13.580 "vfu_tgt_set_base_path", 00:06:13.580 "trace_get_info", 00:06:13.580 "trace_get_tpoint_group_mask", 00:06:13.580 "trace_disable_tpoint_group", 00:06:13.580 "trace_enable_tpoint_group", 00:06:13.580 "trace_clear_tpoint_mask", 00:06:13.580 "trace_set_tpoint_mask", 00:06:13.580 "spdk_get_version", 00:06:13.580 "rpc_get_methods" 00:06:13.580 ] 00:06:13.580 13:41:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.580 13:41:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:13.580 13:41:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1326403 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1326403 ']' 00:06:13.580 13:41:51 spdkcli_tcp -- 
common/autotest_common.sh@950 -- # kill -0 1326403 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1326403 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1326403' 00:06:13.580 killing process with pid 1326403 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1326403 00:06:13.580 13:41:51 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1326403 00:06:14.146 00:06:14.146 real 0m1.209s 00:06:14.146 user 0m2.156s 00:06:14.146 sys 0m0.451s 00:06:14.146 13:41:51 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.146 13:41:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.146 ************************************ 00:06:14.146 END TEST spdkcli_tcp 00:06:14.146 ************************************ 00:06:14.146 13:41:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.146 13:41:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.146 13:41:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.146 13:41:51 -- common/autotest_common.sh@10 -- # set +x 00:06:14.146 ************************************ 00:06:14.146 START TEST dpdk_mem_utility 00:06:14.146 ************************************ 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.146 * Looking for test storage... 
00:06:14.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:14.146 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.146 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1326609 00:06:14.146 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.146 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1326609 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1326609 ']' 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.146 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.146 [2024-07-14 13:41:52.104738] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:14.146 [2024-07-14 13:41:52.104806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326609 ] 00:06:14.404 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.404 [2024-07-14 13:41:52.164477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.404 [2024-07-14 13:41:52.253552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.664 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.664 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:14.664 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.664 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.664 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.664 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.664 { 00:06:14.664 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.664 } 00:06:14.664 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.664 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.664 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:14.664 1 heaps totaling size 814.000000 MiB 00:06:14.664 size: 814.000000 MiB heap id: 0 00:06:14.664 end heaps---------- 00:06:14.664 8 mempools totaling size 598.116089 MiB 00:06:14.664 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.664 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.664 size: 84.521057 MiB name: bdev_io_1326609 00:06:14.664 size: 51.011292 MiB name: evtpool_1326609 00:06:14.664 size: 50.003479 
MiB name: msgpool_1326609 00:06:14.664 size: 21.763794 MiB name: PDU_Pool 00:06:14.664 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.664 size: 0.026123 MiB name: Session_Pool 00:06:14.664 end mempools------- 00:06:14.664 6 memzones totaling size 4.142822 MiB 00:06:14.664 size: 1.000366 MiB name: RG_ring_0_1326609 00:06:14.664 size: 1.000366 MiB name: RG_ring_1_1326609 00:06:14.664 size: 1.000366 MiB name: RG_ring_4_1326609 00:06:14.664 size: 1.000366 MiB name: RG_ring_5_1326609 00:06:14.664 size: 0.125366 MiB name: RG_ring_2_1326609 00:06:14.664 size: 0.015991 MiB name: RG_ring_3_1326609 00:06:14.664 end memzones------- 00:06:14.664 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.664 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:14.664 list of free elements. size: 12.519348 MiB 00:06:14.664 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:14.664 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:14.664 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:14.664 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:14.664 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:14.664 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:14.664 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:14.664 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:14.664 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:14.664 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:14.664 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:14.664 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:14.664 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:14.664 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:14.664 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:06:14.664 list of standard malloc elements. size: 199.218079 MiB 00:06:14.664 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:14.664 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:14.664 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:14.664 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:14.664 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:14.664 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:14.664 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:14.664 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:14.664 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:14.664 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:14.664 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:14.664 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:14.664 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:06:14.664 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:14.664 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:14.664 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:14.664 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:14.664 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:14.665 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:14.665 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:14.665 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:14.665 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:14.665 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:14.665 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:14.665 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:14.665 list of memzone associated elements. 
size: 602.262573 MiB 00:06:14.665 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:14.665 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.665 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:14.665 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.665 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:14.665 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1326609_0 00:06:14.665 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:14.665 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1326609_0 00:06:14.665 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:14.665 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1326609_0 00:06:14.665 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:14.665 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.665 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:14.665 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.665 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:14.665 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1326609 00:06:14.665 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:14.665 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1326609 00:06:14.665 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:14.665 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1326609 00:06:14.665 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:14.665 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.665 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:14.665 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.665 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:14.665 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.665 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:14.665 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.665 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:14.665 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1326609 00:06:14.665 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:14.665 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1326609 00:06:14.665 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:14.665 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1326609 00:06:14.665 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:14.665 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1326609 00:06:14.665 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:14.665 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1326609 00:06:14.665 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:14.665 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.665 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:14.665 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.665 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:14.665 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.665 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:14.665 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1326609 00:06:14.665 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:14.665 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.665 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:14.665 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.665 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:14.665 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1326609 00:06:14.665 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:14.665 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.665 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:14.665 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1326609 00:06:14.665 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:14.665 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1326609 00:06:14.665 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:14.665 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.665 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.665 13:41:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1326609 00:06:14.665 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1326609 ']' 00:06:14.665 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1326609 00:06:14.665 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:14.665 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.665 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1326609 00:06:14.923 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.923 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.923 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1326609' 00:06:14.923 killing process with pid 1326609 00:06:14.923 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1326609 00:06:14.923 13:41:52 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1326609 00:06:15.182 00:06:15.182 real 0m1.055s 
00:06:15.182 user 0m1.006s 00:06:15.182 sys 0m0.424s 00:06:15.182 13:41:53 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.182 13:41:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.182 ************************************ 00:06:15.182 END TEST dpdk_mem_utility 00:06:15.182 ************************************ 00:06:15.182 13:41:53 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:15.182 13:41:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.182 13:41:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.182 13:41:53 -- common/autotest_common.sh@10 -- # set +x 00:06:15.182 ************************************ 00:06:15.182 START TEST event 00:06:15.182 ************************************ 00:06:15.182 13:41:53 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:15.182 * Looking for test storage... 
00:06:15.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.182 13:41:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:15.182 13:41:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:15.182 13:41:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:15.182 13:41:53 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:15.182 13:41:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.182 13:41:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.439 ************************************ 00:06:15.439 START TEST event_perf 00:06:15.439 ************************************ 00:06:15.439 13:41:53 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:15.439 Running I/O for 1 seconds...[2024-07-14 13:41:53.195808] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:15.439 [2024-07-14 13:41:53.195890] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326798 ] 00:06:15.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.439 [2024-07-14 13:41:53.259576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.439 [2024-07-14 13:41:53.351771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.439 [2024-07-14 13:41:53.351834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.439 [2024-07-14 13:41:53.351993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.439 [2024-07-14 13:41:53.351996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.810 Running I/O for 1 seconds... 00:06:16.810 lcore 0: 231601 00:06:16.810 lcore 1: 231600 00:06:16.810 lcore 2: 231600 00:06:16.810 lcore 3: 231600 00:06:16.810 done. 
00:06:16.810 00:06:16.810 real 0m1.254s 00:06:16.810 user 0m4.158s 00:06:16.810 sys 0m0.091s 00:06:16.810 13:41:54 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.810 13:41:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.810 ************************************ 00:06:16.810 END TEST event_perf 00:06:16.810 ************************************ 00:06:16.810 13:41:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.810 13:41:54 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:16.810 13:41:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.810 13:41:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.810 ************************************ 00:06:16.810 START TEST event_reactor 00:06:16.810 ************************************ 00:06:16.810 13:41:54 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.810 [2024-07-14 13:41:54.493432] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:16.810 [2024-07-14 13:41:54.493497] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326955 ] 00:06:16.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.810 [2024-07-14 13:41:54.555100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.810 [2024-07-14 13:41:54.645842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.744 test_start 00:06:17.744 oneshot 00:06:17.744 tick 100 00:06:17.744 tick 100 00:06:17.744 tick 250 00:06:17.744 tick 100 00:06:17.744 tick 100 00:06:17.744 tick 100 00:06:17.744 tick 250 00:06:17.744 tick 500 00:06:17.744 tick 100 00:06:17.744 tick 100 00:06:17.744 tick 250 00:06:17.744 tick 100 00:06:17.744 tick 100 00:06:17.744 test_end 00:06:17.744 00:06:17.744 real 0m1.241s 00:06:17.744 user 0m1.152s 00:06:17.744 sys 0m0.085s 00:06:17.744 13:41:55 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.744 13:41:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:17.744 ************************************ 00:06:17.744 END TEST event_reactor 00:06:17.744 ************************************ 00:06:18.001 13:41:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.001 13:41:55 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:18.001 13:41:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.001 13:41:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.001 ************************************ 00:06:18.001 START TEST event_reactor_perf 00:06:18.001 ************************************ 00:06:18.001 13:41:55 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:18.001 [2024-07-14 13:41:55.779985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:18.001 [2024-07-14 13:41:55.780044] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327111 ] 00:06:18.001 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.001 [2024-07-14 13:41:55.841519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.001 [2024-07-14 13:41:55.934307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.374 test_start 00:06:19.374 test_end 00:06:19.374 Performance: 353271 events per second 00:06:19.374 00:06:19.374 real 0m1.249s 00:06:19.374 user 0m1.157s 00:06:19.374 sys 0m0.087s 00:06:19.374 13:41:57 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.374 13:41:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.374 ************************************ 00:06:19.374 END TEST event_reactor_perf 00:06:19.374 ************************************ 00:06:19.374 13:41:57 event -- event/event.sh@49 -- # uname -s 00:06:19.374 13:41:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:19.374 13:41:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:19.374 13:41:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.374 13:41:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.374 13:41:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.374 ************************************ 00:06:19.374 START TEST event_scheduler 00:06:19.374 ************************************ 00:06:19.374 13:41:57 
event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:19.374 * Looking for test storage... 00:06:19.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:19.374 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:19.374 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1327363 00:06:19.374 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:19.374 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.374 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1327363 00:06:19.374 13:41:57 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1327363 ']' 00:06:19.374 13:41:57 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.374 13:41:57 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.374 13:41:57 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.374 13:41:57 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.374 13:41:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.374 [2024-07-14 13:41:57.165141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:19.374 [2024-07-14 13:41:57.165276] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327363 ] 00:06:19.374 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.374 [2024-07-14 13:41:57.224042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.374 [2024-07-14 13:41:57.310606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.374 [2024-07-14 13:41:57.310670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.374 [2024-07-14 13:41:57.310735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.374 [2024-07-14 13:41:57.310737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:19.633 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 POWER: Env isn't set yet! 00:06:19.633 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:19.633 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:19.633 POWER: Cannot get available frequencies of lcore 0 00:06:19.633 POWER: Attempting to initialise PSTAT power management... 
00:06:19.633 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:19.633 POWER: Initialized successfully for lcore 0 power management 00:06:19.633 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:19.633 POWER: Initialized successfully for lcore 1 power management 00:06:19.633 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:19.633 POWER: Initialized successfully for lcore 2 power management 00:06:19.633 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:19.633 POWER: Initialized successfully for lcore 3 power management 00:06:19.633 [2024-07-14 13:41:57.404082] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:19.633 [2024-07-14 13:41:57.404099] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:19.633 [2024-07-14 13:41:57.404110] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 [2024-07-14 13:41:57.505118] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 ************************************ 00:06:19.633 START TEST scheduler_create_thread 00:06:19.633 ************************************ 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 2 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 3 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 4 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 5 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 6 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.633 7 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 8 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.633 9 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.633 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.910 10 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.910 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.297 13:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.297 13:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:21.297 13:41:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:21.298 13:41:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.298 13:41:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.230 13:42:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.230 00:06:22.230 real 0m2.619s 00:06:22.230 user 0m0.011s 00:06:22.230 sys 0m0.004s 00:06:22.230 13:42:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.230 13:42:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.230 ************************************ 00:06:22.230 END TEST scheduler_create_thread 00:06:22.230 ************************************ 00:06:22.230 13:42:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:22.230 13:42:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1327363 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1327363 ']' 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1327363 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1327363 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1327363' 00:06:22.230 killing process with pid 1327363 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1327363 00:06:22.230 13:42:00 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1327363 00:06:22.794 [2024-07-14 
13:42:00.628449] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:22.795 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:22.795 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:22.795 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:22.795 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:22.795 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:22.795 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:22.795 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:22.795 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:23.053 00:06:23.053 real 0m3.783s 00:06:23.053 user 0m5.761s 00:06:23.053 sys 0m0.311s 00:06:23.053 13:42:00 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.053 13:42:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.053 ************************************ 00:06:23.053 END TEST event_scheduler 00:06:23.053 ************************************ 00:06:23.053 13:42:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:23.053 13:42:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:23.053 13:42:00 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.053 13:42:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.053 13:42:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.053 ************************************ 00:06:23.053 START TEST app_repeat 00:06:23.053 ************************************ 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 
00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1327924 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1327924' 00:06:23.053 Process app_repeat pid: 1327924 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:23.053 spdk_app_start Round 0 00:06:23.053 13:42:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1327924 /var/tmp/spdk-nbd.sock 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1327924 ']' 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:23.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.053 13:42:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.053 [2024-07-14 13:42:00.933415] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:23.053 [2024-07-14 13:42:00.933478] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327924 ] 00:06:23.053 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.053 [2024-07-14 13:42:00.996516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.312 [2024-07-14 13:42:01.092223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.312 [2024-07-14 13:42:01.092227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.312 13:42:01 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.312 13:42:01 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:23.312 13:42:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.570 Malloc0 00:06:23.570 13:42:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.828 Malloc1 00:06:23.828 13:42:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.828 13:42:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.085 /dev/nbd0 00:06:24.085 13:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.085 13:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 
/proc/partitions 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.085 1+0 records in 00:06:24.085 1+0 records out 00:06:24.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162705 s, 25.2 MB/s 00:06:24.085 13:42:02 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.086 13:42:02 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:24.086 13:42:02 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.086 13:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:24.086 13:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:24.086 13:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.086 13:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.086 13:42:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.342 /dev/nbd1 00:06:24.342 13:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.342 13:42:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.342 13:42:02 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:24.342 13:42:02 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:24.342 13:42:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 
1 )) 00:06:24.342 13:42:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:24.343 13:42:02 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:24.343 13:42:02 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.599 1+0 records in 00:06:24.599 1+0 records out 00:06:24.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248394 s, 16.5 MB/s 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:24.599 13:42:02 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:24.599 13:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.599 13:42:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.599 13:42:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.599 13:42:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.599 13:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:24.857 { 00:06:24.857 "nbd_device": "/dev/nbd0", 00:06:24.857 "bdev_name": "Malloc0" 00:06:24.857 }, 00:06:24.857 { 00:06:24.857 "nbd_device": "/dev/nbd1", 00:06:24.857 "bdev_name": "Malloc1" 00:06:24.857 } 00:06:24.857 ]' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.857 { 00:06:24.857 "nbd_device": "/dev/nbd0", 00:06:24.857 "bdev_name": "Malloc0" 00:06:24.857 }, 00:06:24.857 { 00:06:24.857 "nbd_device": "/dev/nbd1", 00:06:24.857 "bdev_name": "Malloc1" 00:06:24.857 } 00:06:24.857 ]' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.857 /dev/nbd1' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.857 /dev/nbd1' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.857 256+0 records in 00:06:24.857 256+0 records out 00:06:24.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379075 s, 277 MB/s 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.857 256+0 records in 00:06:24.857 256+0 records out 00:06:24.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234507 s, 44.7 MB/s 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.857 256+0 records in 00:06:24.857 256+0 records out 00:06:24.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257684 s, 40.7 MB/s 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.857 13:42:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.115 13:42:02 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.115 13:42:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.372 13:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.630 13:42:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.630 13:42:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.887 13:42:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.146 [2024-07-14 13:42:04.055738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.404 [2024-07-14 13:42:04.148581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.404 [2024-07-14 13:42:04.148582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.404 [2024-07-14 13:42:04.210800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.404 [2024-07-14 13:42:04.210886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.934 13:42:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.934 13:42:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.934 spdk_app_start Round 1 00:06:28.934 13:42:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1327924 /var/tmp/spdk-nbd.sock 00:06:28.934 13:42:06 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1327924 ']' 00:06:28.934 13:42:06 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.934 13:42:06 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.934 13:42:06 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:28.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.934 13:42:06 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.934 13:42:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.190 13:42:07 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.190 13:42:07 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:29.190 13:42:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.448 Malloc0 00:06:29.448 13:42:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.706 Malloc1 00:06:29.706 13:42:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.706 13:42:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.707 13:42:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.963 /dev/nbd0 00:06:29.964 13:42:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.964 13:42:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.964 1+0 records in 00:06:29.964 1+0 records out 00:06:29.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180345 s, 22.7 MB/s 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.964 13:42:07 
event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:29.964 13:42:07 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:29.964 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.964 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.964 13:42:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.221 /dev/nbd1 00:06:30.221 13:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.221 13:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:30.221 13:42:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.478 1+0 records in 00:06:30.478 1+0 records out 00:06:30.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021321 s, 
19.2 MB/s 00:06:30.478 13:42:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.478 13:42:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:30.478 13:42:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.478 13:42:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:30.478 13:42:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:30.478 13:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.478 13:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.478 13:42:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.478 13:42:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.478 13:42:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.735 { 00:06:30.735 "nbd_device": "/dev/nbd0", 00:06:30.735 "bdev_name": "Malloc0" 00:06:30.735 }, 00:06:30.735 { 00:06:30.735 "nbd_device": "/dev/nbd1", 00:06:30.735 "bdev_name": "Malloc1" 00:06:30.735 } 00:06:30.735 ]' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.735 { 00:06:30.735 "nbd_device": "/dev/nbd0", 00:06:30.735 "bdev_name": "Malloc0" 00:06:30.735 }, 00:06:30.735 { 00:06:30.735 "nbd_device": "/dev/nbd1", 00:06:30.735 "bdev_name": "Malloc1" 00:06:30.735 } 00:06:30.735 ]' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.735 /dev/nbd1' 00:06:30.735 13:42:08 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.735 /dev/nbd1' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.735 256+0 records in 00:06:30.735 256+0 records out 00:06:30.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485114 s, 216 MB/s 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.735 256+0 records in 00:06:30.735 256+0 records out 00:06:30.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242274 s, 43.3 MB/s 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.735 13:42:08 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.735 256+0 records in 00:06:30.735 256+0 records out 00:06:30.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258332 s, 40.6 MB/s 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.735 13:42:08 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.735 13:42:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.992 13:42:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.993 13:42:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.993 13:42:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.993 13:42:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.250 13:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.507 13:42:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.507 13:42:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.764 13:42:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.021 [2024-07-14 13:42:09.948226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.279 [2024-07-14 13:42:10.047622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
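The `nbd_dd_data_verify` write/verify sequence traced above (fill a scratch file from `/dev/urandom`, `dd` it onto each nbd device, then `cmp` it back) can be sketched as a standalone helper. This is a simplified reconstruction, not the real SPDK helper: the argument order is hypothetical, plain files stand in for `/dev/nbd0`/`/dev/nbd1`, and `oflag=direct` plus the log's `cmp -b -n 1M` form are dropped in favor of portable equivalents.

```shell
#!/bin/sh
# Sketch of the nbd_dd_data_verify pattern from the trace above.
# Hypothetical signature: operation, scratch file, then device paths.
nbd_dd_data_verify() {
    operation=$1; shift
    tmp_file=$1; shift            # stands in for .../test/event/nbdrandtest
    if [ "$operation" = write ]; then
        # 256 x 4096-byte blocks = exactly 1 MiB of random data
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for dev in "$@"; do
            # the log uses "cmp -b -n 1M"; 1048576 bytes is the same limit
            cmp -n 1048576 "$tmp_file" "$dev" || return 1
        done
    fi
}
```

The real test then removes the scratch file and stops the disks over the RPC socket, as the subsequent `nbd_stop_disk` calls in the log show.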
00:06:32.279 [2024-07-14 13:42:10.047626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.279 [2024-07-14 13:42:10.110497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.279 [2024-07-14 13:42:10.110578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.808 13:42:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.808 13:42:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:34.808 spdk_app_start Round 2 00:06:34.808 13:42:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1327924 /var/tmp/spdk-nbd.sock 00:06:34.808 13:42:12 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1327924 ']' 00:06:34.808 13:42:12 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.808 13:42:12 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.808 13:42:12 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
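The `waitfornbd` loops traced earlier in the log poll `/proc/partitions` up to 20 times for the device name before reading from it. A minimal sketch of that retry loop, generalized to any word and file (`wait_for_word` is a hypothetical name, not the SPDK helper):

```shell
#!/bin/sh
# Sketch of the waitfornbd polling loop seen in the trace:
# retry a whole-word grep until the entry appears or 20 tries elapse.
wait_for_word() {
    word=$1 file=$2 i=1           # no "local": plain POSIX sh sketch
    while [ "$i" -le 20 ]; do
        if grep -q -w "$word" "$file"; then
            return 0              # found it, like the "break" in the log
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1                      # gave up after 20 attempts
}
```

Once the device shows up, the real helper additionally does a direct-I/O `dd` read of one block and checks the copied size with `stat -c %s`, which is the `1+0 records in / 4096 bytes` output visible in the log.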
00:06:34.808 13:42:12 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.808 13:42:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.065 13:42:12 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.065 13:42:12 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:35.065 13:42:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.322 Malloc0 00:06:35.322 13:42:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.580 Malloc1 00:06:35.580 13:42:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.580 13:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.838 /dev/nbd0 00:06:35.838 13:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.838 13:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.838 1+0 records in 00:06:35.838 1+0 records out 00:06:35.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174656 s, 23.5 MB/s 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:35.838 13:42:13 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:35.838 13:42:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:35.838 13:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.838 13:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.838 13:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.096 /dev/nbd1 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.096 1+0 records in 00:06:36.096 1+0 records out 00:06:36.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184357 s, 22.2 MB/s 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:36.096 13:42:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.096 13:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.353 13:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.353 { 00:06:36.353 "nbd_device": "/dev/nbd0", 00:06:36.353 "bdev_name": "Malloc0" 00:06:36.353 }, 00:06:36.353 { 00:06:36.353 "nbd_device": "/dev/nbd1", 00:06:36.353 "bdev_name": "Malloc1" 00:06:36.353 } 00:06:36.353 ]' 00:06:36.353 13:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.353 { 00:06:36.353 "nbd_device": "/dev/nbd0", 00:06:36.353 "bdev_name": "Malloc0" 00:06:36.353 }, 00:06:36.353 { 00:06:36.353 "nbd_device": "/dev/nbd1", 00:06:36.354 "bdev_name": "Malloc1" 00:06:36.354 } 00:06:36.354 ]' 00:06:36.354 13:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.611 /dev/nbd1' 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.611 /dev/nbd1' 00:06:36.611 
13:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.611 13:42:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.611 256+0 records in 00:06:36.611 256+0 records out 00:06:36.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498997 s, 210 MB/s 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.612 256+0 records in 00:06:36.612 256+0 records out 00:06:36.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241202 s, 43.5 MB/s 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.612 256+0 records in 00:06:36.612 256+0 records out 00:06:36.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259979 s, 40.3 MB/s 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.612 13:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.869 13:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.128 13:42:14 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.128 13:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.386 13:42:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.386 13:42:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.645 13:42:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.938 [2024-07-14 13:42:15.744893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.938 [2024-07-14 13:42:15.833006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.938 [2024-07-14 13:42:15.833011] 
reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.938 [2024-07-14 13:42:15.888017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.938 [2024-07-14 13:42:15.888079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.228 13:42:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1327924 /var/tmp/spdk-nbd.sock 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1327924 ']' 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:41.228 13:42:18 event.app_repeat -- event/event.sh@39 -- # killprocess 1327924 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1327924 ']' 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1327924 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1327924 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1327924' 00:06:41.228 killing process with pid 1327924 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1327924 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1327924 00:06:41.228 spdk_app_start is called in Round 0. 00:06:41.228 Shutdown signal received, stop current app iteration 00:06:41.228 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:41.228 spdk_app_start is called in Round 1. 00:06:41.228 Shutdown signal received, stop current app iteration 00:06:41.228 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:41.228 spdk_app_start is called in Round 2. 
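The `nbd_get_count` steps above extract device names from the `nbd_get_disks` JSON with `jq -r '.[] | .nbd_device'` and then count them with `grep -c /dev/nbd`. The `true` that appears in the trace after the disks are stopped is the tell-tale of an `|| true` guard: `grep -c` exits non-zero when the count is 0, which would otherwise abort a `set -e` script. A sketch of just the counting step, with the `jq` output replaced by a plain string for illustration:

```shell
#!/bin/sh
# Sketch of the nbd_get_count counting step from the trace.
count_nbd_devices() {
    # $1: newline-separated device names (stand-in for the jq output)
    # grep -c prints the match count; "|| true" keeps the exit status
    # zero when the list is empty, as the "true" line in the log shows.
    printf '%s\n' "$1" | grep -c /dev/nbd || true
}

count_nbd_devices '/dev/nbd0
/dev/nbd1'                # prints 2, matching "count=2" in the log
count_nbd_devices ''      # prints 0, matching "count=0" after teardown
```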
00:06:41.228 Shutdown signal received, stop current app iteration 00:06:41.228 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:06:41.228 spdk_app_start is called in Round 3. 00:06:41.228 Shutdown signal received, stop current app iteration 00:06:41.228 13:42:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.228 13:42:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.228 00:06:41.228 real 0m18.088s 00:06:41.228 user 0m39.552s 00:06:41.228 sys 0m3.088s 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.228 13:42:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 ************************************ 00:06:41.228 END TEST app_repeat 00:06:41.228 ************************************ 00:06:41.228 13:42:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.228 13:42:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.228 13:42:19 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.228 13:42:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.228 13:42:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 ************************************ 00:06:41.228 START TEST cpu_locks 00:06:41.228 ************************************ 00:06:41.228 13:42:19 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.228 * Looking for test storage... 
00:06:41.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.228 13:42:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.228 13:42:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.228 13:42:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.228 13:42:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.228 13:42:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.228 13:42:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.228 13:42:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 ************************************ 00:06:41.228 START TEST default_locks 00:06:41.228 ************************************ 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1330837 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1330837 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1330837 ']' 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:41.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.228 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.228 [2024-07-14 13:42:19.166731] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:41.228 [2024-07-14 13:42:19.166811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330837 ] 00:06:41.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.488 [2024-07-14 13:42:19.226563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.488 [2024-07-14 13:42:19.311193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.747 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.747 13:42:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:41.747 13:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1330837 00:06:41.747 13:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1330837 00:06:41.747 13:42:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.314 lslocks: write error 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1330837 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1330837 ']' 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1330837 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1330837 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1330837' 00:06:42.314 killing process with pid 1330837 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1330837 00:06:42.314 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1330837 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1330837 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1330837 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1330837 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1330837 ']' 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1330837) - No such process 00:06:42.574 ERROR: process (pid: 1330837) is no longer running 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.574 00:06:42.574 real 0m1.350s 00:06:42.574 user 0m1.301s 00:06:42.574 sys 0m0.540s 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.574 13:42:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.574 ************************************ 00:06:42.574 END TEST default_locks 00:06:42.574 ************************************ 00:06:42.574 13:42:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:42.574 13:42:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.574 13:42:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.574 13:42:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.574 ************************************ 00:06:42.574 START TEST default_locks_via_rpc 00:06:42.574 ************************************ 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1331009 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1331009 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1331009 ']' 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.574 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.833 [2024-07-14 13:42:20.577585] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:42.833 [2024-07-14 13:42:20.577680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331009 ] 00:06:42.833 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.833 [2024-07-14 13:42:20.645143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.833 [2024-07-14 13:42:20.738418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.092 13:42:20 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.092 13:42:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.092 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.092 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.092 13:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1331009 00:06:43.092 13:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1331009 00:06:43.092 13:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1331009 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1331009 ']' 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1331009 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1331009 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1331009' 00:06:43.663 killing process with pid 1331009 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 
1331009 00:06:43.663 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1331009 00:06:43.922 00:06:43.922 real 0m1.265s 00:06:43.922 user 0m1.224s 00:06:43.922 sys 0m0.554s 00:06:43.922 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.922 13:42:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.922 ************************************ 00:06:43.922 END TEST default_locks_via_rpc 00:06:43.922 ************************************ 00:06:43.922 13:42:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:43.922 13:42:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.922 13:42:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.922 13:42:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.922 ************************************ 00:06:43.922 START TEST non_locking_app_on_locked_coremask 00:06:43.922 ************************************ 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1331207 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1331207 /var/tmp/spdk.sock 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1331207 ']' 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.922 13:42:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.922 [2024-07-14 13:42:21.883517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:43.922 [2024-07-14 13:42:21.883610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331207 ] 00:06:44.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.182 [2024-07-14 13:42:21.945262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.182 [2024-07-14 13:42:22.030321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.440 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.440 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:44.440 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1331294 00:06:44.440 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:44.441 13:42:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1331294 /var/tmp/spdk2.sock 00:06:44.441 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1331294 ']' 00:06:44.441 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.441 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.441 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.441 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.441 13:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.441 [2024-07-14 13:42:22.335884] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:44.441 [2024-07-14 13:42:22.335985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331294 ] 00:06:44.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.700 [2024-07-14 13:42:22.428820] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.700 [2024-07-14 13:42:22.428853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.700 [2024-07-14 13:42:22.611178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.634 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.634 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:45.634 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1331207 00:06:45.634 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1331207 00:06:45.634 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.893 lslocks: write error 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1331207 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1331207 ']' 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1331207 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1331207 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1331207' 00:06:45.893 killing process with pid 1331207 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1331207 00:06:45.893 13:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1331207 00:06:46.827 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1331294 00:06:46.827 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1331294 ']' 00:06:46.827 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1331294 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1331294 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1331294' 00:06:46.828 killing process with pid 1331294 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1331294 00:06:46.828 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1331294 00:06:47.085 00:06:47.085 real 0m3.077s 00:06:47.085 user 0m3.193s 00:06:47.085 sys 0m1.041s 00:06:47.085 13:42:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.085 13:42:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.085 ************************************ 00:06:47.085 END TEST non_locking_app_on_locked_coremask 00:06:47.085 ************************************ 00:06:47.085 13:42:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.085 13:42:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.085 13:42:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.085 13:42:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.085 ************************************ 00:06:47.085 START TEST locking_app_on_unlocked_coremask 00:06:47.085 ************************************ 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1331605 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1331605 /var/tmp/spdk.sock 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1331605 ']' 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.085 13:42:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.085 13:42:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.085 [2024-07-14 13:42:25.017197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:47.085 [2024-07-14 13:42:25.017289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331605 ] 00:06:47.085 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.343 [2024-07-14 13:42:25.079957] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:47.343 [2024-07-14 13:42:25.079996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.343 [2024-07-14 13:42:25.168478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1331733 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1331733 /var/tmp/spdk2.sock 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1331733 ']' 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.603 13:42:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.603 [2024-07-14 13:42:25.470531] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:47.603 [2024-07-14 13:42:25.470613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331733 ] 00:06:47.603 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.603 [2024-07-14 13:42:25.562129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.863 [2024-07-14 13:42:25.737509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.799 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.799 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:48.799 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1331733 00:06:48.799 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1331733 00:06:48.799 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.057 lslocks: write error 00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1331605 00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1331605 ']' 00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1331605 00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1331605 00:06:49.057 13:42:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1331605'
00:06:49.057 killing process with pid 1331605
00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1331605
00:06:49.057 13:42:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1331605
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1331733
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1331733 ']'
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1331733
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1331733
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1331733'
00:06:49.995 killing process with pid 1331733
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1331733
00:06:49.995 13:42:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1331733
00:06:50.253
00:06:50.253 real	0m3.235s
00:06:50.253 user	0m3.363s
00:06:50.253 sys	0m1.083s
00:06:50.253 13:42:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:50.253 13:42:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.253 ************************************
00:06:50.253 END TEST locking_app_on_unlocked_coremask
00:06:50.253 ************************************
00:06:50.253 13:42:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:50.253 13:42:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:50.253 13:42:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:50.253 13:42:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:50.513 ************************************
00:06:50.513 START TEST locking_app_on_locked_coremask
00:06:50.513 ************************************
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1332039
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1332039 /var/tmp/spdk.sock
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1332039 ']'
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:50.513 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.513 [2024-07-14 13:42:28.298886] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:06:50.513 [2024-07-14 13:42:28.298974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332039 ]
00:06:50.513 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.513 [2024-07-14 13:42:28.357452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.513 [2024-07-14 13:42:28.445529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1332136
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1332136 /var/tmp/spdk2.sock
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1332136 /var/tmp/spdk2.sock
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1332136 /var/tmp/spdk2.sock
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1332136 ']'
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:50.771 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:50.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:50.772 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:50.772 13:42:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:50.772 [2024-07-14 13:42:28.748792] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:06:50.772 [2024-07-14 13:42:28.748904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332136 ]
00:06:51.031 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.032 [2024-07-14 13:42:28.843045] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1332039 has claimed it.
00:06:51.032 [2024-07-14 13:42:28.843095] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:51.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1332136) - No such process
00:06:51.598 ERROR: process (pid: 1332136) is no longer running
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1332039
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1332039
00:06:51.598 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:52.164 lslocks: write error
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1332039
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1332039 ']'
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1332039
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1332039
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1332039'
00:06:52.164 killing process with pid 1332039
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1332039
00:06:52.164 13:42:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1332039
00:06:52.422
00:06:52.422 real	0m2.058s
00:06:52.422 user	0m2.213s
00:06:52.422 sys	0m0.657s
00:06:52.422 13:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:52.422 13:42:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.422 ************************************
00:06:52.422 END TEST locking_app_on_locked_coremask
00:06:52.422 ************************************
00:06:52.422 13:42:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:52.422 13:42:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:52.422 13:42:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:52.422 13:42:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:52.422 ************************************
00:06:52.422 START TEST locking_overlapped_coremask
00:06:52.422 ************************************
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1332332
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1332332 /var/tmp/spdk.sock
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1332332 ']'
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:52.422 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.681 [2024-07-14 13:42:30.411235] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:06:52.681 [2024-07-14 13:42:30.411314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332332 ]
00:06:52.681 EAL: No free 2048 kB hugepages reported on node 1
00:06:52.681 [2024-07-14 13:42:30.475109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:52.681 [2024-07-14 13:42:30.567712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:52.681 [2024-07-14 13:42:30.567789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:52.681 [2024-07-14 13:42:30.567791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1332348
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1332348 /var/tmp/spdk2.sock
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1332348 /var/tmp/spdk2.sock
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1332348 /var/tmp/spdk2.sock
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1332348 ']'
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:52.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:52.940 13:42:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.940 [2024-07-14 13:42:30.879093] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:06:52.940 [2024-07-14 13:42:30.879200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332348 ]
00:06:53.197 EAL: No free 2048 kB hugepages reported on node 1
00:06:53.197 [2024-07-14 13:42:30.973398] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1332332 has claimed it.
00:06:53.197 [2024-07-14 13:42:30.973467] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:53.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1332348) - No such process
00:06:53.764 ERROR: process (pid: 1332348) is no longer running
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1332332
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1332332 ']'
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1332332
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1332332
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1332332'
00:06:53.764 killing process with pid 1332332
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 1332332
00:06:53.764 13:42:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1332332
00:06:54.330
00:06:54.330 real	0m1.645s
00:06:54.330 user	0m4.425s
00:06:54.330 sys	0m0.473s
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:54.330 ************************************
00:06:54.330 END TEST locking_overlapped_coremask
00:06:54.330 ************************************
00:06:54.330 13:42:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:54.330 13:42:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:54.330 13:42:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:54.330 13:42:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:54.330 ************************************
00:06:54.330 START TEST locking_overlapped_coremask_via_rpc
00:06:54.330 ************************************
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1332586
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1332586 /var/tmp/spdk.sock
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1332586 ']'
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:54.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:54.330 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.330 [2024-07-14 13:42:32.103270] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:06:54.330 [2024-07-14 13:42:32.103365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332586 ]
00:06:54.330 EAL: No free 2048 kB hugepages reported on node 1
00:06:54.330 [2024-07-14 13:42:32.163025] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:54.330 [2024-07-14 13:42:32.163066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:54.330 [2024-07-14 13:42:32.252307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:54.330 [2024-07-14 13:42:32.252371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:54.330 [2024-07-14 13:42:32.252374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1332641
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1332641 /var/tmp/spdk2.sock
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1332641 ']'
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:54.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:54.587 13:42:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:54.587 [2024-07-14 13:42:32.550870] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:06:54.587 [2024-07-14 13:42:32.550972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332641 ]
00:06:54.846 EAL: No free 2048 kB hugepages reported on node 1
00:06:54.846 [2024-07-14 13:42:32.640084] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:54.846 [2024-07-14 13:42:32.640119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:54.846 [2024-07-14 13:42:32.815006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:54.846 [2024-07-14 13:42:32.818937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:06:54.846 [2024-07-14 13:42:32.818940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:55.786 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.787 [2024-07-14 13:42:33.507981] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1332586 has claimed it.
00:06:55.787 request:
00:06:55.787 {
00:06:55.787 "method": "framework_enable_cpumask_locks",
00:06:55.787 "req_id": 1
00:06:55.787 }
00:06:55.787 Got JSON-RPC error response
00:06:55.787 response:
00:06:55.787 {
00:06:55.787 "code": -32603,
00:06:55.787 "message": "Failed to claim CPU core: 2"
00:06:55.787 }
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1332586 /var/tmp/spdk.sock
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1332586 ']'
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1332641 /var/tmp/spdk2.sock
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1332641 ']'
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:55.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:55.787 13:42:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:56.045
00:06:56.045 real	0m1.958s
00:06:56.045 user	0m1.006s
00:06:56.045 sys	0m0.185s
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:56.045 13:42:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:56.045 ************************************
00:06:56.045 END TEST locking_overlapped_coremask_via_rpc
00:06:56.045 ************************************
00:06:56.303 13:42:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:06:56.303 13:42:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1332586 ]]
00:06:56.303 13:42:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1332586
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1332586 ']'
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1332586
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@951 -- # uname
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1332586
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1332586'
00:06:56.303 killing process with pid 1332586
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1332586
00:06:56.303 13:42:34 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1332586
00:06:56.567 13:42:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1332641 ]]
00:06:56.567 13:42:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1332641
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1332641 ']'
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1332641
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@951 -- # uname
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1332641
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1332641'
00:06:56.567 killing process with pid 1332641
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1332641
00:06:56.567 13:42:34 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1332641
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1332586 ]]
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1332586
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1332586 ']'
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1332586
00:06:57.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1332586) - No such process
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1332586 is not found'
00:06:57.177 Process with pid 1332586 is not found
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1332641 ]]
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1332641
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1332641 ']'
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1332641
00:06:57.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1332641) - No such process
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1332641 is not found'
00:06:57.177 Process with pid 1332641 is not found
00:06:57.177 13:42:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:57.177
00:06:57.177 real	0m15.852s
00:06:57.177 user	0m27.441s
00:06:57.177 sys	0m5.435s
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:57.177 13:42:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:57.177 ************************************
00:06:57.177 END TEST cpu_locks
00:06:57.177 ************************************
00:06:57.177
00:06:57.177 real	0m41.814s
00:06:57.177 user	1m19.351s
00:06:57.177 sys	0m9.335s
00:06:57.177 13:42:34 event -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:57.177 13:42:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:57.177 ************************************
00:06:57.177 END TEST event
00:06:57.177 ************************************
00:06:57.177 13:42:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:57.177 13:42:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:57.177 13:42:34 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:57.177 13:42:34 -- common/autotest_common.sh@10 -- # set +x
00:06:57.177 ************************************
00:06:57.177 START TEST thread
00:06:57.177 ************************************
00:06:57.177 13:42:34 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:57.177 * Looking for test storage...
00:06:57.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:57.177 13:42:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.177 13:42:35 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:57.177 13:42:35 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.177 13:42:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.177 ************************************ 00:06:57.177 START TEST thread_poller_perf 00:06:57.177 ************************************ 00:06:57.177 13:42:35 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.177 [2024-07-14 13:42:35.050003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:57.177 [2024-07-14 13:42:35.050060] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333010 ] 00:06:57.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.177 [2024-07-14 13:42:35.111395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.436 [2024-07-14 13:42:35.201163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.436 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:58.375 ====================================== 00:06:58.375 busy:2713411096 (cyc) 00:06:58.375 total_run_count: 292000 00:06:58.375 tsc_hz: 2700000000 (cyc) 00:06:58.375 ====================================== 00:06:58.375 poller_cost: 9292 (cyc), 3441 (nsec) 00:06:58.375 00:06:58.375 real 0m1.252s 00:06:58.375 user 0m1.167s 00:06:58.375 sys 0m0.079s 00:06:58.375 13:42:36 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.375 13:42:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.375 ************************************ 00:06:58.375 END TEST thread_poller_perf 00:06:58.375 ************************************ 00:06:58.375 13:42:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.375 13:42:36 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:58.375 13:42:36 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.375 13:42:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.375 ************************************ 00:06:58.375 START TEST thread_poller_perf 00:06:58.375 ************************************ 00:06:58.375 13:42:36 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:58.375 [2024-07-14 13:42:36.347577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:58.375 [2024-07-14 13:42:36.347641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333165 ] 00:06:58.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.634 [2024-07-14 13:42:36.413845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.634 [2024-07-14 13:42:36.506748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.634 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:00.012 ====================================== 00:07:00.012 busy:2702787707 (cyc) 00:07:00.012 total_run_count: 3938000 00:07:00.012 tsc_hz: 2700000000 (cyc) 00:07:00.012 ====================================== 00:07:00.012 poller_cost: 686 (cyc), 254 (nsec) 00:07:00.012 00:07:00.012 real 0m1.255s 00:07:00.012 user 0m1.166s 00:07:00.012 sys 0m0.083s 00:07:00.012 13:42:37 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.012 13:42:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.012 ************************************ 00:07:00.012 END TEST thread_poller_perf 00:07:00.012 ************************************ 00:07:00.012 13:42:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.012 00:07:00.012 real 0m2.645s 00:07:00.012 user 0m2.383s 00:07:00.012 sys 0m0.261s 00:07:00.012 13:42:37 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.012 13:42:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.012 ************************************ 00:07:00.012 END TEST thread 00:07:00.012 ************************************ 00:07:00.012 13:42:37 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:00.012 13:42:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.012 
13:42:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.012 13:42:37 -- common/autotest_common.sh@10 -- # set +x 00:07:00.012 ************************************ 00:07:00.012 START TEST accel 00:07:00.012 ************************************ 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:00.012 * Looking for test storage... 00:07:00.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:00.012 13:42:37 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:00.012 13:42:37 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:00.012 13:42:37 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:00.012 13:42:37 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1333400 00:07:00.012 13:42:37 accel -- accel/accel.sh@63 -- # waitforlisten 1333400 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@827 -- # '[' -z 1333400 ']' 00:07:00.012 13:42:37 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.012 13:42:37 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.012 13:42:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.012 13:42:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.012 13:42:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.012 13:42:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.012 13:42:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.012 13:42:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.012 13:42:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:00.012 13:42:37 accel -- accel/accel.sh@41 -- # jq -r . 00:07:00.012 [2024-07-14 13:42:37.754993] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:00.012 [2024-07-14 13:42:37.755084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333400 ] 00:07:00.012 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.012 [2024-07-14 13:42:37.818495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.012 [2024-07-14 13:42:37.908415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.271 13:42:38 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.271 13:42:38 accel -- common/autotest_common.sh@860 -- # return 0 00:07:00.271 13:42:38 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:00.271 13:42:38 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:00.271 13:42:38 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:00.271 13:42:38 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:00.271 13:42:38 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:00.272 13:42:38 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:00.272 13:42:38 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # IFS== 00:07:00.272 13:42:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:00.272 13:42:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:00.272 13:42:38 accel -- accel/accel.sh@75 -- # killprocess 1333400 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@946 -- # '[' -z 1333400 ']' 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@950 -- # kill -0 1333400 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@951 -- # uname 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1333400 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1333400' 00:07:00.272 killing process with pid 1333400 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@965 -- # kill 1333400 00:07:00.272 13:42:38 accel -- common/autotest_common.sh@970 -- # wait 1333400 00:07:00.838 13:42:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:00.838 13:42:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:00.838 
13:42:38 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:00.838 13:42:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.838 13:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.838 13:42:38 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:00.838 13:42:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:00.838 13:42:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:00.838 13:42:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.838 13:42:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.838 13:42:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.838 13:42:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.839 13:42:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.839 13:42:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:00.839 13:42:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:00.839 13:42:38 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.839 13:42:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:00.839 13:42:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:00.839 13:42:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:00.839 13:42:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.839 13:42:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.839 ************************************ 00:07:00.839 START TEST accel_missing_filename 00:07:00.839 ************************************ 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.839 13:42:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.839 13:42:38 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:00.839 13:42:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:00.839 [2024-07-14 13:42:38.748105] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:00.839 [2024-07-14 13:42:38.748181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333532 ] 00:07:00.839 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.839 [2024-07-14 13:42:38.810931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.098 [2024-07-14 13:42:38.903381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.098 [2024-07-14 13:42:38.964554] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.098 [2024-07-14 13:42:39.053029] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:01.357 A filename is required. 
00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.357 00:07:01.357 real 0m0.403s 00:07:01.357 user 0m0.290s 00:07:01.357 sys 0m0.148s 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.357 13:42:39 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:01.357 ************************************ 00:07:01.357 END TEST accel_missing_filename 00:07:01.357 ************************************ 00:07:01.357 13:42:39 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.357 13:42:39 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:01.357 13:42:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.357 13:42:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.357 ************************************ 00:07:01.357 START TEST accel_compress_verify 00:07:01.357 ************************************ 00:07:01.357 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.357 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:01.357 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.358 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:01.358 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.358 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:01.358 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.358 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:01.358 13:42:39 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:01.358 [2024-07-14 13:42:39.199042] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:01.358 [2024-07-14 13:42:39.199111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333672 ] 00:07:01.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.358 [2024-07-14 13:42:39.261394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.615 [2024-07-14 13:42:39.354227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.615 [2024-07-14 13:42:39.415843] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.615 [2024-07-14 13:42:39.504346] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:01.615 00:07:01.615 Compression does not support the verify option, aborting. 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.615 00:07:01.615 real 0m0.402s 00:07:01.615 user 0m0.288s 00:07:01.615 sys 0m0.146s 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.615 13:42:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:01.615 ************************************ 00:07:01.615 END TEST accel_compress_verify 00:07:01.615 ************************************ 00:07:01.876 13:42:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:01.876 
13:42:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.876 ************************************ 00:07:01.876 START TEST accel_wrong_workload 00:07:01.876 ************************************ 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:01.876 13:42:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:01.876 Unsupported workload type: foobar 00:07:01.876 [2024-07-14 13:42:39.640243] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:01.876 accel_perf options: 00:07:01.876 [-h help message] 00:07:01.876 [-q queue depth per core] 00:07:01.876 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:01.876 [-T number of threads per core 00:07:01.876 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:01.876 [-t time in seconds] 00:07:01.876 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:01.876 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:01.876 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:01.876 [-l for compress/decompress workloads, name of uncompressed input file 00:07:01.876 [-S for crc32c workload, use this seed value (default 0) 00:07:01.876 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:01.876 [-f for fill workload, use this BYTE value (default 255) 00:07:01.876 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:01.876 [-y verify result if this switch is on] 00:07:01.876 [-a tasks to allocate per core (default: same value as -q)] 00:07:01.876 Can be used to spread operations across a wider range of memory. 
00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.876 00:07:01.876 real 0m0.021s 00:07:01.876 user 0m0.014s 00:07:01.876 sys 0m0.006s 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.876 13:42:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:01.876 ************************************ 00:07:01.876 END TEST accel_wrong_workload 00:07:01.876 ************************************ 00:07:01.876 13:42:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.876 Error: writing output failed: Broken pipe 00:07:01.876 ************************************ 00:07:01.876 START TEST accel_negative_buffers 00:07:01.876 ************************************ 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.876 13:42:39 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:01.876 13:42:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:01.876 -x option must be non-negative. 00:07:01.876 [2024-07-14 13:42:39.700170] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:01.876 accel_perf options: 00:07:01.876 [-h help message] 00:07:01.876 [-q queue depth per core] 00:07:01.876 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:01.876 [-T number of threads per core 00:07:01.876 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:01.876 [-t time in seconds] 00:07:01.876 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:01.876 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:01.876 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:01.876 [-l for compress/decompress workloads, name of uncompressed input file 00:07:01.876 [-S for crc32c workload, use this seed value (default 0) 00:07:01.876 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:01.876 [-f for fill workload, use this BYTE value (default 255) 00:07:01.876 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:01.876 [-y verify result if this switch is on] 00:07:01.876 [-a tasks to allocate per core (default: same value as -q)] 00:07:01.876 Can be used to spread operations across a wider range of memory. 
00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.876 00:07:01.876 real 0m0.021s 00:07:01.876 user 0m0.009s 00:07:01.876 sys 0m0.012s 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.876 13:42:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:01.876 ************************************ 00:07:01.876 END TEST accel_negative_buffers 00:07:01.876 ************************************ 00:07:01.876 Error: writing output failed: Broken pipe 00:07:01.876 13:42:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.876 13:42:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.876 ************************************ 00:07:01.876 START TEST accel_crc32c 00:07:01.876 ************************************ 00:07:01.876 13:42:39 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:01.876 13:42:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.877 13:42:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.877 13:42:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.877 13:42:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.877 13:42:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.877 13:42:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:01.877 13:42:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:01.877 [2024-07-14 13:42:39.769778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:01.877 [2024-07-14 13:42:39.769842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333743 ] 00:07:01.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.877 [2024-07-14 13:42:39.835945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.136 [2024-07-14 13:42:39.929119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.136 13:42:39 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.136 13:42:39 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.137 13:42:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.509 13:42:41 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:03.509 13:42:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.509 00:07:03.509 real 0m1.408s 00:07:03.509 user 0m1.264s 00:07:03.509 sys 0m0.145s 00:07:03.509 13:42:41 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.509 13:42:41 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:03.509 ************************************ 00:07:03.509 END TEST accel_crc32c 00:07:03.509 ************************************ 00:07:03.509 13:42:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:03.509 13:42:41 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:03.509 13:42:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.509 13:42:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.509 ************************************ 00:07:03.509 START TEST accel_crc32c_C2 00:07:03.509 ************************************ 00:07:03.509 
13:42:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.509 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:03.510 [2024-07-14 13:42:41.217295] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:03.510 [2024-07-14 13:42:41.217358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334009 ] 00:07:03.510 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.510 [2024-07-14 13:42:41.281171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.510 [2024-07-14 13:42:41.374472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 
13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.510 13:42:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.887 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.888 13:42:42 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.888 00:07:04.888 real 0m1.405s 00:07:04.888 user 0m1.261s 00:07:04.888 sys 0m0.145s 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.888 13:42:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:04.888 ************************************ 00:07:04.888 END TEST accel_crc32c_C2 00:07:04.888 ************************************ 00:07:04.888 13:42:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:04.888 13:42:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:04.888 13:42:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.888 13:42:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.888 ************************************ 00:07:04.888 START TEST accel_copy 00:07:04.888 ************************************ 00:07:04.888 13:42:42 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:04.888 13:42:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:04.888 [2024-07-14 13:42:42.665855] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:04.888 [2024-07-14 13:42:42.665942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334172 ] 00:07:04.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.888 [2024-07-14 13:42:42.730387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.888 [2024-07-14 13:42:42.823154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # 
val=software 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 13:42:42 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # 
case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.147 13:42:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 
13:42:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:06.082 13:42:44 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.082 00:07:06.082 real 0m1.412s 00:07:06.082 user 0m1.269s 00:07:06.082 sys 0m0.143s 00:07:06.082 13:42:44 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.082 13:42:44 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.082 ************************************ 00:07:06.082 END TEST accel_copy 00:07:06.082 ************************************ 00:07:06.343 13:42:44 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.343 13:42:44 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:06.343 13:42:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.343 13:42:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.343 ************************************ 00:07:06.343 START TEST accel_fill 00:07:06.343 ************************************ 00:07:06.343 13:42:44 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 
00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:06.343 13:42:44 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:06.343 [2024-07-14 13:42:44.118807] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:06.343 [2024-07-14 13:42:44.118871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334324 ] 00:07:06.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.343 [2024-07-14 13:42:44.181801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.343 [2024-07-14 13:42:44.271679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:06.602 13:42:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 13:42:45 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:07.540 13:42:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.540 00:07:07.540 real 0m1.393s 00:07:07.540 user 0m1.250s 00:07:07.540 sys 0m0.144s 00:07:07.540 13:42:45 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.540 13:42:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:07.540 ************************************ 00:07:07.540 END TEST accel_fill 00:07:07.540 ************************************ 00:07:07.540 13:42:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:07.540 13:42:45 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:07.540 13:42:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.540 13:42:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 ************************************ 00:07:07.800 START TEST accel_copy_crc32c 00:07:07.800 ************************************ 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 
00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:07.800 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:07.800 [2024-07-14 13:42:45.554655] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:07.801 [2024-07-14 13:42:45.554722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334482 ] 00:07:07.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.801 [2024-07-14 13:42:45.616335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.801 [2024-07-14 13:42:45.710285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 
13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.801 13:42:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 
13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.181 00:07:09.181 real 0m1.394s 00:07:09.181 user 0m1.247s 00:07:09.181 sys 0m0.148s 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.181 13:42:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:09.181 ************************************ 00:07:09.181 END TEST accel_copy_crc32c 00:07:09.181 ************************************ 00:07:09.181 13:42:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:09.181 13:42:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:09.181 13:42:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.181 13:42:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.181 ************************************ 00:07:09.181 START TEST accel_copy_crc32c_C2 00:07:09.181 ************************************ 00:07:09.181 13:42:46 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:09.181 13:42:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:09.181 [2024-07-14 13:42:47.000243] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:09.181 [2024-07-14 13:42:47.000306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334754 ] 00:07:09.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.181 [2024-07-14 13:42:47.063541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.181 [2024-07-14 13:42:47.153720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.442 13:42:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.817 00:07:10.817 real 0m1.396s 00:07:10.817 user 0m1.256s 00:07:10.817 sys 0m0.141s 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.817 13:42:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:10.817 ************************************ 00:07:10.817 END TEST accel_copy_crc32c_C2 00:07:10.817 ************************************ 00:07:10.817 13:42:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:10.817 13:42:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:10.817 13:42:48 accel -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.817 13:42:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.817 ************************************ 00:07:10.817 START TEST accel_dualcast 00:07:10.817 ************************************ 00:07:10.817 13:42:48 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:10.817 [2024-07-14 13:42:48.437299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:10.817 [2024-07-14 13:42:48.437363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334912 ] 00:07:10.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.817 [2024-07-14 13:42:48.501776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.817 [2024-07-14 13:42:48.594756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 
13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.817 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.818 13:42:48 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.818 13:42:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:49 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.221 13:42:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:12.221 13:42:49 
accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.221 00:07:12.221 real 0m1.410s 00:07:12.221 user 0m1.258s 00:07:12.221 sys 0m0.153s 00:07:12.221 13:42:49 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.221 13:42:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:12.221 ************************************ 00:07:12.221 END TEST accel_dualcast 00:07:12.221 ************************************ 00:07:12.221 13:42:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:12.221 13:42:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:12.221 13:42:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.221 13:42:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.221 ************************************ 00:07:12.221 START TEST accel_compare 00:07:12.221 ************************************ 00:07:12.221 13:42:49 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.221 13:42:49 accel.accel_compare -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:12.221 13:42:49 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:12.221 [2024-07-14 13:42:49.890351] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:12.221 [2024-07-14 13:42:49.890414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335073 ] 00:07:12.221 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.221 [2024-07-14 13:42:49.952787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.221 [2024-07-14 13:42:50.052672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 
13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.221 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.222 13:42:50 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.222 13:42:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:13.602 13:42:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.602 00:07:13.602 real 0m1.416s 00:07:13.602 user 0m1.263s 00:07:13.602 sys 0m0.154s 00:07:13.602 13:42:51 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.602 13:42:51 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:13.602 ************************************ 00:07:13.602 END TEST accel_compare 00:07:13.602 ************************************ 00:07:13.602 13:42:51 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:13.602 13:42:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:13.602 13:42:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.602 13:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.602 ************************************ 00:07:13.602 START TEST accel_xor 00:07:13.602 ************************************ 00:07:13.602 13:42:51 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:13.602 [2024-07-14 13:42:51.351153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:13.602 [2024-07-14 13:42:51.351228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335239 ] 00:07:13.602 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.602 [2024-07-14 13:42:51.416898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.602 [2024-07-14 13:42:51.511044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:13.602 13:42:51 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:07:13.602 13:42:51 accel.accel_xor -- accel/accel.sh@19-23 -- # [config read loop: val=, val=xor (accel_opc=xor), val=2, val='4096 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, val=]
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@19-21 -- # [six empty val= reads]
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:14.979
00:07:14.979 real 0m1.413s
00:07:14.979 user 0m1.274s
00:07:14.979 sys 0m0.141s
00:07:14.979 13:42:52 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:14.979 13:42:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:14.979 ************************************
00:07:14.979 END TEST accel_xor
00:07:14.979 ************************************
00:07:14.979 13:42:52 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:14.979 13:42:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:14.979 13:42:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:14.979 13:42:52 accel -- common/autotest_common.sh@10 -- # set +x
00:07:14.979 ************************************
00:07:14.979 START TEST accel_xor
00:07:14.979 ************************************
00:07:14.979 13:42:52 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:14.979 13:42:52 accel.accel_xor --
accel/accel.sh@19 -- # read -r var val
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:14.979 13:42:52 accel.accel_xor -- accel/accel.sh@31-41 -- # [build_accel_config: accel_json_cfg=(), no JSON config options set, jq -r .]
[2024-07-14 13:42:52.805116] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-14 13:42:52.805189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335499 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-14 13:42:52.867776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-14 13:42:52.958508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.239 13:42:53 accel.accel_xor -- accel/accel.sh@19-23 -- # [config read loop: val=, val=0x1, val=, val=xor (accel_opc=xor), val=3, val='4096 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, val=]
00:07:16.651 13:42:54 accel.accel_xor -- accel/accel.sh@19-21 -- # [six empty val= reads]
00:07:16.651 13:42:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:16.651 13:42:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:16.651 13:42:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:16.651
00:07:16.651 real 0m1.400s
00:07:16.651 user 0m1.261s
00:07:16.651 sys 0m0.139s
00:07:16.651 13:42:54 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:16.651 13:42:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:16.651 ************************************
00:07:16.651 END TEST accel_xor
00:07:16.651 ************************************
00:07:16.651 13:42:54 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:16.651 13:42:54 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:16.651 13:42:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:16.651 13:42:54 accel -- common/autotest_common.sh@10 -- # set +x
00:07:16.651 ************************************
00:07:16.651 START TEST accel_dif_verify
00:07:16.651 ************************************
00:07:16.651 13:42:54 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:07:16.651 13:42:54 accel.accel_dif_verify --
accel/accel.sh@17 -- # local accel_module
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=:
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:16.651 13:42:54 accel.accel_dif_verify -- accel/accel.sh@31-41 -- # [build_accel_config: accel_json_cfg=(), no JSON config options set, jq -r .]
[2024-07-14 13:42:54.248356] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-14 13:42:54.248419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335658 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-14 13:42:54.310970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-14 13:42:54.403671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.652 13:42:54 accel.accel_dif_verify -- accel/accel.sh@19-23 -- # [config read loop: val=, val=0x1, val=, val=dif_verify (accel_opc=dif_verify), val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=, val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No, val=]
00:07:18.030 13:42:55 accel.accel_dif_verify -- accel/accel.sh@19-21 -- # [six empty val= reads]
00:07:18.030 13:42:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:18.030 13:42:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:18.030 13:42:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:18.030
00:07:18.030 real 0m1.407s
00:07:18.030 user 0m1.264s
00:07:18.030 sys 0m0.146s
00:07:18.030 13:42:55 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:18.030 13:42:55 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:18.030 ************************************
00:07:18.030 END TEST accel_dif_verify
00:07:18.030 ************************************
00:07:18.030 13:42:55 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:18.030 13:42:55 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:18.030 13:42:55 accel -- common/autotest_common.sh@1103 -- #
xtrace_disable 00:07:18.030 13:42:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.030 ************************************ 00:07:18.030 START TEST accel_dif_generate 00:07:18.030 ************************************ 00:07:18.030 13:42:55 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:18.030 [2024-07-14 13:42:55.696648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:18.030 [2024-07-14 13:42:55.696713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335814 ] 00:07:18.030 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.030 [2024-07-14 13:42:55.758161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.030 [2024-07-14 13:42:55.851471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:18.030 13:42:55 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:07:18.030 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:18.031 13:42:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:19.414 13:42:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:19.415 13:42:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:19.415 13:42:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:19.415 13:42:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:19.415 
00:07:19.415 real	0m1.405s
00:07:19.415 user	0m1.262s
00:07:19.415 sys	0m0.146s
00:07:19.415 13:42:57 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:19.415 13:42:57 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:19.415 ************************************
00:07:19.415 END TEST accel_dif_generate
00:07:19.415 ************************************
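The `accel.sh@19`/`@21` records that dominate the trace above are accel.sh's output-parsing loop: it reads the perf tool's output as colon-separated `var:val` pairs and latches the opcode and module into `accel_opc` and `accel_module`. A minimal sketch of that pattern; the key names and sample input here are illustrative, not the real accel_perf output format:

```shell
#!/usr/bin/env bash
# Sketch of the "IFS=: read -r var val" / case "$var" loop visible in the
# accel.sh trace. Keys and sample input are illustrative placeholders.
parse_accel_output() {
  local accel_opc="" accel_module=""
  while IFS=: read -r var val; do
    case "$var" in
      workload) accel_opc=$val ;;      # e.g. dif_generate
      module)   accel_module=$val ;;   # e.g. software
      *)        : ;;                   # every other line is read and ignored
    esac
  done
  echo "opc=$accel_opc module=$accel_module"
}

parse_accel_output <<'EOF'
workload:dif_generate
module:software
queue_depth:32
EOF
```

The empty `val=` records in the log are this same loop consuming blank lines in the tool's output.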
00:07:19.415 13:42:57 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:19.415 13:42:57 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:19.415 13:42:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:19.415 13:42:57 accel -- common/autotest_common.sh@10 -- # set +x
00:07:19.415 ************************************
00:07:19.415 START TEST accel_dif_generate_copy
00:07:19.415 ************************************
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:19.415 [2024-07-14 13:42:57.151639] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:19.415 [2024-07-14 13:42:57.151704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336089 ]
00:07:19.415 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.415 [2024-07-14 13:42:57.215951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.415 [2024-07-14 13:42:57.308122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:07:19.415 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:19.416 13:42:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:07:20.797 13:42:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:20.797 13:42:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:20.797 13:42:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:20.797 
00:07:20.797 real	0m1.414s
00:07:20.797 user	0m1.271s
00:07:20.797 sys	0m0.144s
00:07:20.797 13:42:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:20.797 13:42:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:20.797 ************************************
00:07:20.797 END TEST accel_dif_generate_copy
00:07:20.797 ************************************
00:07:20.797 13:42:58 accel -- accel/accel.sh@115 -- # [[ y == y ]]
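The START TEST / END TEST banners and the real/user/sys timings wrapped around each test above come from a run_test-style helper in common/autotest_common.sh. A simplified sketch of that shape; the real helper also toggles xtrace and performs the `'[' N -le 1 ']'` argument checks seen in the log, which are omitted here:

```shell
#!/usr/bin/env bash
# Simplified run_test: banner, timed command, banner. A sketch only; the
# real version in common/autotest_common.sh also manages xtrace (set +x).
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"          # `time` emits the real/user/sys lines seen in the log
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_true true
```

Because `time` is a shell keyword here, its report goes to stderr, which is why the timing lines appear separately from the banners in the captured log.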
00:07:20.797 13:42:58 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:20.797 13:42:58 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:07:20.797 13:42:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:20.797 13:42:58 accel -- common/autotest_common.sh@10 -- # set +x
00:07:20.797 ************************************
00:07:20.797 START TEST accel_comp
00:07:20.797 ************************************
00:07:20.797 13:42:58 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:07:20.797 13:42:58 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:20.797 [2024-07-14 13:42:58.614480] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:20.797 [2024-07-14 13:42:58.614539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336242 ]
00:07:20.797 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.797 [2024-07-14 13:42:58.683500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.058 [2024-07-14 13:42:58.778568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:21.058 13:42:58 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:22.438 13:43:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:22.438 13:43:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:22.438 13:43:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.438 
00:07:22.438 real	0m1.424s
00:07:22.438 user	0m1.274s
00:07:22.438 sys	0m0.150s
00:07:22.438 13:43:00 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:22.438 13:43:00 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:22.438 ************************************
00:07:22.438 END TEST accel_comp
00:07:22.438 ************************************
00:07:22.438 13:43:00 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:22.438 13:43:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:07:22.438 13:43:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:22.438 13:43:00 accel -- common/autotest_common.sh@10 -- # set +x
00:07:22.438 ************************************
00:07:22.438 START TEST accel_decomp
00:07:22.438 ************************************
00:07:22.438 13:43:00 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:07:22.438 13:43:00 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:07:22.439 [2024-07-14 13:43:00.077171] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:22.439 [2024-07-14 13:43:00.077236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336404 ]
00:07:22.439 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.439 [2024-07-14 13:43:00.143246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.439 [2024-07-14 13:43:00.236187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:22.439 13:43:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:07:23.821 13:43:01 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.821 13:43:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.821 00:07:23.821 real 0m1.412s 00:07:23.821 user 0m1.260s 00:07:23.821 sys 0m0.154s 00:07:23.821 13:43:01 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.821 13:43:01 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 ************************************ 00:07:23.821 END TEST accel_decomp 00:07:23.821 ************************************ 00:07:23.821 13:43:01 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.821 13:43:01 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:23.821 13:43:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.821 13:43:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.821 ************************************ 00:07:23.821 START TEST accel_decmop_full 00:07:23.821 ************************************ 00:07:23.821 13:43:01 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 
00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:23.821 [2024-07-14 13:43:01.541076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:23.821 [2024-07-14 13:43:01.541139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336559 ] 00:07:23.821 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.821 [2024-07-14 13:43:01.605899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.821 [2024-07-14 13:43:01.697829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.821 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 
13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.822 13:43:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.199 13:43:02 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.199 13:43:02 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.199 00:07:25.199 real 0m1.413s 00:07:25.199 user 0m1.268s 00:07:25.199 sys 0m0.147s 00:07:25.199 13:43:02 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.199 13:43:02 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:25.199 ************************************ 00:07:25.199 END TEST accel_decmop_full 00:07:25.199 ************************************ 00:07:25.199 13:43:02 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.199 13:43:02 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:25.199 13:43:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.199 13:43:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.199 ************************************ 
00:07:25.199 START TEST accel_decomp_mcore 00:07:25.199 ************************************ 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:25.199 13:43:02 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:25.200 [2024-07-14 13:43:02.995786] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:25.200 [2024-07-14 13:43:02.995847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336834 ] 00:07:25.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.200 [2024-07-14 13:43:03.058760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.200 [2024-07-14 13:43:03.155174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.200 [2024-07-14 13:43:03.155226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.200 [2024-07-14 13:43:03.155338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.200 [2024-07-14 13:43:03.155341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.458 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.458 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.458 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.458 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:25.459 13:43:03 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.459 13:43:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.836 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.837 00:07:26.837 real 0m1.411s 00:07:26.837 user 0m4.703s 00:07:26.837 sys 0m0.150s 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.837 13:43:04 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:26.837 ************************************ 00:07:26.837 END TEST accel_decomp_mcore 00:07:26.837 ************************************ 00:07:26.837 13:43:04 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.837 13:43:04 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:26.837 13:43:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.837 13:43:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.837 ************************************ 00:07:26.837 START TEST accel_decomp_full_mcore 00:07:26.837 ************************************ 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:26.837 [2024-07-14 13:43:04.448468] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:26.837 [2024-07-14 13:43:04.448528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336992 ] 00:07:26.837 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.837 [2024-07-14 13:43:04.511271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.837 [2024-07-14 13:43:04.607599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.837 [2024-07-14 13:43:04.607667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.837 [2024-07-14 13:43:04.607758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.837 [2024-07-14 13:43:04.607760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:26.837 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.837 13:43:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.838 13:43:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.216 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.217 00:07:28.217 real 0m1.431s 00:07:28.217 user 0m4.764s 00:07:28.217 sys 0m0.156s 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.217 13:43:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:28.217 ************************************ 00:07:28.217 END TEST accel_decomp_full_mcore 00:07:28.217 ************************************ 00:07:28.217 13:43:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.217 13:43:05 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:28.217 13:43:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.217 13:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.217 
************************************ 00:07:28.217 START TEST accel_decomp_mthread 00:07:28.217 ************************************ 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:28.217 13:43:05 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:28.217 [2024-07-14 13:43:05.924431] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:28.217 [2024-07-14 13:43:05.924495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337155 ] 00:07:28.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.217 [2024-07-14 13:43:05.990611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.217 [2024-07-14 13:43:06.084419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- 
accel/accel.sh@22 -- # accel_module=software 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.217 13:43:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.597 13:43:07 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.597 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.598 00:07:29.598 real 0m1.402s 00:07:29.598 user 0m1.264s 00:07:29.598 sys 0m0.142s 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.598 13:43:07 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.598 ************************************ 00:07:29.598 END TEST accel_decomp_mthread 00:07:29.598 ************************************ 00:07:29.598 13:43:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.598 13:43:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:29.598 13:43:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.598 13:43:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.598 ************************************ 00:07:29.598 START TEST accel_decomp_full_mthread 00:07:29.598 ************************************ 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:29.598 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:29.598 [2024-07-14 13:43:07.374156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:29.598 [2024-07-14 13:43:07.374226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337425 ] 00:07:29.598 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.598 [2024-07-14 13:43:07.437810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.598 [2024-07-14 13:43:07.529916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:29.857 13:43:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.235 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.236 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.236 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.236 13:43:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.236 00:07:31.236 real 0m1.449s 00:07:31.236 user 0m1.303s 00:07:31.236 sys 0m0.149s 00:07:31.236 13:43:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.236 13:43:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:31.236 ************************************ 00:07:31.236 END TEST accel_decomp_full_mthread 00:07:31.236 ************************************ 00:07:31.236 13:43:08 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:31.236 13:43:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.236 13:43:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:31.236 13:43:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.236 13:43:08 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:31.236 13:43:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.236 13:43:08 accel -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:07:31.236 13:43:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.236 13:43:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.236 13:43:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.236 13:43:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.236 13:43:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:31.236 13:43:08 accel -- accel/accel.sh@41 -- # jq -r . 00:07:31.236 ************************************ 00:07:31.236 START TEST accel_dif_functional_tests 00:07:31.236 ************************************ 00:07:31.236 13:43:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.236 [2024-07-14 13:43:08.891634] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:31.236 [2024-07-14 13:43:08.891704] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337580 ] 00:07:31.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.236 [2024-07-14 13:43:08.959458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.236 [2024-07-14 13:43:09.052999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.236 [2024-07-14 13:43:09.053068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.236 [2024-07-14 13:43:09.053071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.236 00:07:31.236 00:07:31.236 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.236 http://cunit.sourceforge.net/ 00:07:31.236 00:07:31.236 00:07:31.236 Suite: accel_dif 00:07:31.236 Test: verify: DIF generated, GUARD check ...passed 00:07:31.236 Test: verify: DIF generated, APPTAG check ...passed 00:07:31.236 Test: verify: DIF generated, REFTAG 
check ...passed 00:07:31.236 Test: verify: DIF not generated, GUARD check ...[2024-07-14 13:43:09.146888] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:31.236 passed 00:07:31.236 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 13:43:09.146961] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:31.236 passed 00:07:31.236 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 13:43:09.146994] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:31.236 passed 00:07:31.236 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:31.236 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 13:43:09.147055] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:31.236 passed 00:07:31.236 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:31.236 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:31.236 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:31.236 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 13:43:09.147198] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:31.236 passed 00:07:31.236 Test: verify copy: DIF generated, GUARD check ...passed 00:07:31.236 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:31.236 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:31.236 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 13:43:09.147341] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:31.236 passed 00:07:31.236 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 13:43:09.147375] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:31.236 passed 00:07:31.236 Test: verify copy: DIF not 
generated, REFTAG check ...[2024-07-14 13:43:09.147405] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:31.236 passed 00:07:31.236 Test: generate copy: DIF generated, GUARD check ...passed 00:07:31.236 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:31.236 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:31.236 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:31.236 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:31.236 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:31.236 Test: generate copy: iovecs-len validate ...[2024-07-14 13:43:09.147609] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:31.236 passed 00:07:31.236 Test: generate copy: buffer alignment validate ...passed 00:07:31.236 00:07:31.236 Run Summary: Type Total Ran Passed Failed Inactive 00:07:31.236 suites 1 1 n/a 0 0 00:07:31.236 tests 26 26 26 0 0 00:07:31.236 asserts 115 115 115 0 n/a 00:07:31.236 00:07:31.236 Elapsed time = 0.002 seconds 00:07:31.493 00:07:31.493 real 0m0.514s 00:07:31.493 user 0m0.792s 00:07:31.493 sys 0m0.186s 00:07:31.493 13:43:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.493 13:43:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:31.493 ************************************ 00:07:31.493 END TEST accel_dif_functional_tests 00:07:31.493 ************************************ 00:07:31.493 00:07:31.493 real 0m31.733s 00:07:31.493 user 0m35.088s 00:07:31.493 sys 0m4.619s 00:07:31.494 13:43:09 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.494 13:43:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.494 ************************************ 00:07:31.494 END TEST accel 00:07:31.494 ************************************ 
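The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timings throughout this log come from autotest's `run_test` wrapper. A minimal sketch of that wrapper pattern, simplified from what `autotest_common.sh` actually does (the real helper also toggles xtrace and records per-test timing, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified run_test: bracket a named test with banners, time it,
# and propagate the wrapped command's exit status.
run_test() {
  local name="$1"; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  local start=$SECONDS rc=0
  "$@" || rc=$?
  echo "************************************"
  echo "END TEST $name (rc=$rc, $((SECONDS - start))s)"
  echo "************************************"
  return $rc
}

run_test demo_true true
```

Wrapped this way, a failing command still gets its `END TEST` banner printed before the non-zero status propagates to the caller.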
00:07:31.494 13:43:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:31.494 13:43:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:31.494 13:43:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.494 13:43:09 -- common/autotest_common.sh@10 -- # set +x 00:07:31.494 ************************************ 00:07:31.494 START TEST accel_rpc 00:07:31.494 ************************************ 00:07:31.494 13:43:09 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:31.752 * Looking for test storage... 00:07:31.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:31.752 13:43:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:31.752 13:43:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1337693 00:07:31.752 13:43:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:31.752 13:43:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1337693 00:07:31.752 13:43:09 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1337693 ']' 00:07:31.752 13:43:09 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.752 13:43:09 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.752 13:43:09 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:31.752 13:43:09 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.752 13:43:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.752 [2024-07-14 13:43:09.543099] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:31.752 [2024-07-14 13:43:09.543216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337693 ] 00:07:31.752 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.752 [2024-07-14 13:43:09.604788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.752 [2024-07-14 13:43:09.688641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.012 13:43:09 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:32.012 13:43:09 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:32.012 13:43:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:32.012 13:43:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:32.012 13:43:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:32.012 13:43:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:32.012 13:43:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:32.012 13:43:09 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.012 13:43:09 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.012 13:43:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.012 ************************************ 00:07:32.012 START TEST accel_assign_opcode 00:07:32.012 ************************************ 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:32.012 [2024-07-14 13:43:09.777392] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:32.012 [2024-07-14 13:43:09.785394] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.012 13:43:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.273 software 00:07:32.273 00:07:32.273 real 0m0.295s 00:07:32.273 user 0m0.039s 00:07:32.273 sys 0m0.007s 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.273 13:43:10 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:32.273 ************************************ 00:07:32.273 END TEST accel_assign_opcode 00:07:32.273 ************************************ 00:07:32.273 13:43:10 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1337693 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1337693 ']' 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1337693 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1337693 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1337693' 00:07:32.273 killing process with pid 1337693 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@965 -- # kill 1337693 00:07:32.273 13:43:10 accel_rpc -- common/autotest_common.sh@970 -- # wait 1337693 00:07:32.842 00:07:32.842 real 0m1.101s 00:07:32.842 user 0m1.017s 00:07:32.842 sys 0m0.441s 00:07:32.842 13:43:10 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.842 13:43:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.842 ************************************ 00:07:32.842 END TEST accel_rpc 00:07:32.842 
************************************ 00:07:32.842 13:43:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:32.842 13:43:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.842 13:43:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.842 13:43:10 -- common/autotest_common.sh@10 -- # set +x 00:07:32.842 ************************************ 00:07:32.842 START TEST app_cmdline 00:07:32.842 ************************************ 00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:32.842 * Looking for test storage... 00:07:32.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.842 13:43:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:32.842 13:43:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1337926 00:07:32.842 13:43:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:32.842 13:43:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1337926 00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1337926 ']' 00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
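The cmdline test starts `spdk_tgt` with `--rpcs-allowed spdk_get_version,rpc_get_methods`, restricting the RPC surface, then compares the sorted list returned by `rpc_get_methods` against the expected set. That comparison can be sketched without a live target (the `printf` below stands in for `rpc_cmd rpc_get_methods | jq -r '.[]'`):

```shell
expected_methods=(rpc_get_methods spdk_get_version)
# Stand-in for: rpc_cmd rpc_get_methods | jq -r '.[]' | sort
mapfile -t methods < <(printf '%s\n' spdk_get_version rpc_get_methods | sort)
# Same count and same sorted contents, as in the (( 2 == 2 )) / [[ ... ]] checks above.
(( ${#methods[@]} == ${#expected_methods[@]} )) || exit 1
[[ "${methods[*]}" == "${expected_methods[*]}" ]] && echo "methods match"
```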
00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:32.842 13:43:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.842 [2024-07-14 13:43:10.688336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:32.842 [2024-07-14 13:43:10.688431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337926 ] 00:07:32.842 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.842 [2024-07-14 13:43:10.749105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.101 [2024-07-14 13:43:10.836540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.360 13:43:11 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:33.360 13:43:11 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:33.360 { 00:07:33.360 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:33.360 "fields": { 00:07:33.360 "major": 24, 00:07:33.360 "minor": 5, 00:07:33.360 "patch": 1, 00:07:33.360 "suffix": "-pre", 00:07:33.360 "commit": "5fa2f5086" 00:07:33.360 } 00:07:33.360 } 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:33.360 13:43:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.360 
13:43:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:33.360 13:43:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.619 13:43:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:33.619 13:43:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:33.619 13:43:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.619 13:43:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:33.619 13:43:11 app_cmdline -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:33.877 request: 00:07:33.877 { 00:07:33.877 "method": "env_dpdk_get_mem_stats", 00:07:33.877 "req_id": 1 00:07:33.877 } 00:07:33.877 Got JSON-RPC error response 00:07:33.877 response: 00:07:33.877 { 00:07:33.877 "code": -32601, 00:07:33.877 "message": "Method not found" 00:07:33.877 } 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.877 13:43:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1337926 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1337926 ']' 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1337926 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1337926 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1337926' 00:07:33.877 killing process with pid 1337926 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@965 -- # kill 1337926 00:07:33.877 13:43:11 app_cmdline -- common/autotest_common.sh@970 -- # wait 1337926 00:07:34.136 00:07:34.136 real 0m1.448s 00:07:34.136 user 0m1.764s 00:07:34.136 sys 0m0.460s 00:07:34.136 13:43:12 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.136 
13:43:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.136 ************************************ 00:07:34.136 END TEST app_cmdline 00:07:34.136 ************************************ 00:07:34.136 13:43:12 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:34.136 13:43:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.136 13:43:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.136 13:43:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.136 ************************************ 00:07:34.136 START TEST version 00:07:34.136 ************************************ 00:07:34.136 13:43:12 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:34.395 * Looking for test storage... 00:07:34.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.395 13:43:12 version -- app/version.sh@17 -- # get_header_version major 00:07:34.395 13:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:34.395 13:43:12 version -- app/version.sh@14 -- # cut -f2 00:07:34.395 13:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.395 13:43:12 version -- app/version.sh@17 -- # major=24 00:07:34.395 13:43:12 version -- app/version.sh@18 -- # get_header_version minor 00:07:34.396 13:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:34.396 13:43:12 version -- app/version.sh@14 -- # cut -f2 00:07:34.396 13:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.396 13:43:12 version -- app/version.sh@18 -- # minor=5 00:07:34.396 13:43:12 version -- app/version.sh@19 -- # get_header_version patch 00:07:34.396 13:43:12 version -- 
app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:34.396 13:43:12 version -- app/version.sh@14 -- # cut -f2 00:07:34.396 13:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.396 13:43:12 version -- app/version.sh@19 -- # patch=1 00:07:34.396 13:43:12 version -- app/version.sh@20 -- # get_header_version suffix 00:07:34.396 13:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:34.396 13:43:12 version -- app/version.sh@14 -- # cut -f2 00:07:34.396 13:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.396 13:43:12 version -- app/version.sh@20 -- # suffix=-pre 00:07:34.396 13:43:12 version -- app/version.sh@22 -- # version=24.5 00:07:34.396 13:43:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:34.396 13:43:12 version -- app/version.sh@25 -- # version=24.5.1 00:07:34.396 13:43:12 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:34.396 13:43:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.396 13:43:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:34.396 13:43:12 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:34.396 13:43:12 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:34.396 00:07:34.396 real 0m0.108s 00:07:34.396 user 0m0.061s 00:07:34.396 sys 0m0.069s 00:07:34.396 13:43:12 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.396 13:43:12 version -- common/autotest_common.sh@10 -- # set +x 
00:07:34.396 ************************************ 00:07:34.396 END TEST version 00:07:34.396 ************************************ 00:07:34.396 13:43:12 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@198 -- # uname -s 00:07:34.396 13:43:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:34.396 13:43:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:34.396 13:43:12 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:34.396 13:43:12 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:34.396 13:43:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.396 13:43:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 13:43:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:34.396 13:43:12 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:34.396 13:43:12 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:34.396 13:43:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:34.396 13:43:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.396 13:43:12 -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 ************************************ 00:07:34.396 START TEST nvmf_tcp 00:07:34.396 ************************************ 00:07:34.396 13:43:12 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:34.396 * Looking for test storage... 
00:07:34.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.396 
13:43:12 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.396 13:43:12 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.396 13:43:12 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.396 13:43:12 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.396 13:43:12 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.396 13:43:12 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.396 13:43:12 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:34.396 13:43:12 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:34.396 13:43:12 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:34.396 13:43:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:34.396 13:43:12 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:34.396 13:43:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:34.396 13:43:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.396 13:43:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.396 
************************************ 00:07:34.396 START TEST nvmf_example 00:07:34.396 ************************************ 00:07:34.396 13:43:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:34.656 * Looking for test storage... 00:07:34.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.656 13:43:12 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.656 13:43:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:36.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:36.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:36.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:36.598 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.598 13:43:14 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:36.598 00:07:36.598 --- 10.0.0.2 ping statistics --- 00:07:36.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.598 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:07:36.598 00:07:36.598 --- 10.0.0.1 ping statistics --- 00:07:36.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.598 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.598 13:43:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 13:43:14 
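The `nvmf_tcp_init` steps recorded above split one NIC pair into a target side (moved into a network namespace) and an initiator side, then open TCP port 4420 and verify reachability with ping. A minimal sketch of that sequence follows; the interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and 10.0.0.0/24 addresses are taken from the log. The function only *emits* the commands (applying them needs root and the actual devices), so it can be inspected or piped to `sh`:

```shell
#!/bin/sh
# Emits the namespace-setup commands seen in the log; pipe to `sh` (as root,
# with the real interfaces present) to actually apply them.
setup_cmds() {
    ns=cvl_0_0_ns_spdk   # namespace that will hold the target-side interface
    tgt=cvl_0_0          # target interface (moves into the namespace)
    ini=cvl_0_1          # initiator interface (stays in the default namespace)

    echo "ip -4 addr flush $tgt"
    echo "ip -4 addr flush $ini"
    echo "ip netns add $ns"
    echo "ip link set $tgt netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
    echo "ip link set $ini up"
    echo "ip netns exec $ns ip link set $tgt up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"
}

setup_cmds
```

The two ping checks in the log (initiator to 10.0.0.2, and 10.0.0.1 from inside the namespace) confirm the split worked before the target is started.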
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1339873 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1339873 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1339873 ']' 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.599 13:43:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.534 13:43:15 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.534 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.791 13:43:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.791 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:37.791 13:43:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:37.791 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.764 Initializing NVMe Controllers 00:07:47.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.764 Initialization complete. Launching workers. 
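The `rpc_cmd` calls above configure the running nvmf example target over its RPC socket: create the TCP transport, a 64 MiB malloc bdev with 512 B blocks, a subsystem, attach the namespace, and add a listener on 10.0.0.2:4420. A sketch of the equivalent sequence as explicit `rpc.py` invocations (assumption: `rpc_cmd` in this harness wraps SPDK's `scripts/rpc.py` against the default `/var/tmp/spdk.sock`); as above, the function only emits the commands:

```shell
#!/bin/sh
# Emits the RPC sequence from the log; pipe to `sh` with a running target to apply.
RPC="scripts/rpc.py"   # assumed path, relative to the SPDK repo root

rpc_seq() {
    echo "$RPC nvmf_create_transport -t tcp -o -u 8192"
    echo "$RPC bdev_malloc_create 64 512"   # 64 MiB, 512 B blocks -> Malloc0
    echo "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    echo "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    echo "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
}

rpc_seq
```

After the listener is up, `spdk_nvme_perf` connects to `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`, exactly the endpoint this sequence created.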
00:07:47.764 ======================================================== 00:07:47.764 Latency(us) 00:07:47.764 Device Information : IOPS MiB/s Average min max 00:07:47.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14464.62 56.50 4426.63 887.70 16182.65 00:07:47.764 ======================================================== 00:07:47.764 Total : 14464.62 56.50 4426.63 887.70 16182.65 00:07:47.764 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.764 rmmod nvme_tcp 00:07:47.764 rmmod nvme_fabrics 00:07:47.764 rmmod nvme_keyring 00:07:47.764 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1339873 ']' 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1339873 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1339873 ']' 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1339873 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- 
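The perf summary above is internally consistent, which a few lines of arithmetic confirm: throughput follows from IOPS times the 4 KiB I/O size (`-o 4096`), and IOPS times average latency recovers the in-flight depth (Little's law), which should sit near the requested queue depth (`-q 64`). Numbers are copied from the table:

```shell
#!/bin/sh
# IOPS, I/O size, and average latency copied from the spdk_nvme_perf summary.
iops=14464.62
io_size=4096         # bytes, from -o 4096
avg_lat_us=4426.63   # average latency in microseconds

# Throughput in MiB/s = IOPS * bytes-per-IO / 2^20
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.1f", i * s / 1048576 }')

# Little's law: in-flight = arrival rate * mean time in system
in_flight=$(awk -v i="$iops" -v w="$avg_lat_us" 'BEGIN { printf "%.0f", i * w / 1e6 }')

echo "$mib_s MiB/s"         # matches the 56.50 MiB/s column
echo "$in_flight in flight" # ~64, the configured queue depth
```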
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1339873 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1339873' 00:07:48.023 killing process with pid 1339873 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1339873 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1339873 00:07:48.023 nvmf threads initialize successfully 00:07:48.023 bdev subsystem init successfully 00:07:48.023 created a nvmf target service 00:07:48.023 create targets's poll groups done 00:07:48.023 all subsystems of target started 00:07:48.023 nvmf target is running 00:07:48.023 all subsystems of target stopped 00:07:48.023 destroy targets's poll groups done 00:07:48.023 destroyed the nvmf target service 00:07:48.023 bdev subsystem finish successfully 00:07:48.023 nvmf threads destroy successfully 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.023 13:43:25 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.562 13:43:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:50.562 13:43:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:50.562 13:43:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.562 13:43:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 00:07:50.562 real 0m15.711s 00:07:50.562 user 0m43.586s 00:07:50.562 sys 0m3.786s 00:07:50.562 13:43:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.562 13:43:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 ************************************ 00:07:50.562 END TEST nvmf_example 00:07:50.562 ************************************ 00:07:50.562 13:43:28 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:50.562 13:43:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:50.562 13:43:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.562 13:43:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 ************************************ 00:07:50.562 START TEST nvmf_filesystem 00:07:50.562 ************************************ 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:50.562 * Looking for test storage... 
00:07:50.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # 
CONFIG_SHARED=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:50.562 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:50.563 #define SPDK_CONFIG_H 00:07:50.563 
#define SPDK_CONFIG_APPS 1
00:07:50.563 #define SPDK_CONFIG_ARCH native
00:07:50.563 #undef SPDK_CONFIG_ASAN
00:07:50.563 #undef SPDK_CONFIG_AVAHI
00:07:50.563 #undef SPDK_CONFIG_CET
00:07:50.563 #define SPDK_CONFIG_COVERAGE 1
00:07:50.563 #define SPDK_CONFIG_CROSS_PREFIX
00:07:50.563 #undef SPDK_CONFIG_CRYPTO
00:07:50.563 #undef SPDK_CONFIG_CRYPTO_MLX5
00:07:50.563 #undef SPDK_CONFIG_CUSTOMOCF
00:07:50.563 #undef SPDK_CONFIG_DAOS
00:07:50.563 #define SPDK_CONFIG_DAOS_DIR
00:07:50.563 #define SPDK_CONFIG_DEBUG 1
00:07:50.563 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:07:50.563 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:50.563 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:07:50.563 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:07:50.563 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:07:50.563 #undef SPDK_CONFIG_DPDK_UADK
00:07:50.563 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:07:50.563 #define SPDK_CONFIG_EXAMPLES 1
00:07:50.563 #undef SPDK_CONFIG_FC
00:07:50.563 #define SPDK_CONFIG_FC_PATH
00:07:50.563 #define SPDK_CONFIG_FIO_PLUGIN 1
00:07:50.563 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:07:50.563 #undef SPDK_CONFIG_FUSE
00:07:50.563 #undef SPDK_CONFIG_FUZZER
00:07:50.563 #define SPDK_CONFIG_FUZZER_LIB
00:07:50.563 #undef SPDK_CONFIG_GOLANG
00:07:50.563 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:07:50.563 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:07:50.563 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:07:50.563 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:07:50.563 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:07:50.563 #undef SPDK_CONFIG_HAVE_LIBBSD
00:07:50.563 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:07:50.563 #define SPDK_CONFIG_IDXD 1
00:07:50.563 #define SPDK_CONFIG_IDXD_KERNEL 1
00:07:50.563 #undef SPDK_CONFIG_IPSEC_MB
00:07:50.563 #define SPDK_CONFIG_IPSEC_MB_DIR
00:07:50.563 #define SPDK_CONFIG_ISAL 1
00:07:50.563 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:07:50.563 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:07:50.563 #define SPDK_CONFIG_LIBDIR
00:07:50.563 #undef SPDK_CONFIG_LTO
00:07:50.563 #define SPDK_CONFIG_MAX_LCORES
00:07:50.563 #define SPDK_CONFIG_NVME_CUSE 1
00:07:50.563 #undef SPDK_CONFIG_OCF
00:07:50.563 #define SPDK_CONFIG_OCF_PATH
00:07:50.563 #define SPDK_CONFIG_OPENSSL_PATH
00:07:50.563 #undef SPDK_CONFIG_PGO_CAPTURE
00:07:50.563 #define SPDK_CONFIG_PGO_DIR
00:07:50.563 #undef SPDK_CONFIG_PGO_USE
00:07:50.563 #define SPDK_CONFIG_PREFIX /usr/local
00:07:50.563 #undef SPDK_CONFIG_RAID5F
00:07:50.563 #undef SPDK_CONFIG_RBD
00:07:50.563 #define SPDK_CONFIG_RDMA 1
00:07:50.563 #define SPDK_CONFIG_RDMA_PROV verbs
00:07:50.563 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:07:50.563 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:07:50.563 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:07:50.563 #define SPDK_CONFIG_SHARED 1
00:07:50.563 #undef SPDK_CONFIG_SMA
00:07:50.563 #define SPDK_CONFIG_TESTS 1
00:07:50.563 #undef SPDK_CONFIG_TSAN
00:07:50.563 #define SPDK_CONFIG_UBLK 1
00:07:50.563 #define SPDK_CONFIG_UBSAN 1
00:07:50.563 #undef SPDK_CONFIG_UNIT_TESTS
00:07:50.563 #undef SPDK_CONFIG_URING
00:07:50.563 #define SPDK_CONFIG_URING_PATH
00:07:50.563 #undef SPDK_CONFIG_URING_ZNS
00:07:50.563 #undef SPDK_CONFIG_USDT
00:07:50.563 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:07:50.563 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:07:50.563 #define SPDK_CONFIG_VFIO_USER 1
00:07:50.563 #define SPDK_CONFIG_VFIO_USER_DIR
00:07:50.563 #define SPDK_CONFIG_VHOST 1
00:07:50.563 #define SPDK_CONFIG_VIRTIO 1
00:07:50.563 #undef SPDK_CONFIG_VTUNE
00:07:50.563 #define SPDK_CONFIG_VTUNE_DIR
00:07:50.563 #define SPDK_CONFIG_WERROR 1
00:07:50.563 #define SPDK_CONFIG_WPDK_DIR
00:07:50.563 #undef SPDK_CONFIG_XNVME
00:07:50.563 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem --
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:50.563 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:50.564 13:43:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:50.564 13:43:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:50.564 13:43:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export 
SPDK_TEST_VMD 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v23.11 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export 
SPDK_TEST_NVMF_NICS 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.564 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.565 13:43:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:50.565 13:43:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1341581 ]] 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1341581 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.kWw1f6 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 
00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.kWw1f6/tests/target /tmp/spdk.kWw1f6 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:50.565 13:43:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53463912448 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994692608 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8530780160 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993969152 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997344256 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390178816 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398940160 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
uses["$mount"]=8761344 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30997000192 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997348352 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=348160 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.565 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:50.566 * Looking for test storage... 
00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53463912448 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10745372672 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.566 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.567 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.567 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.567 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.567 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.567 13:43:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.567 13:43:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:52.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:52.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:52.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.469 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:52.470 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:07:52.470 00:07:52.470 --- 10.0.0.2 ping statistics --- 00:07:52.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.470 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:52.470 00:07:52.470 --- 10.0.0.1 ping statistics --- 00:07:52.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.470 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.470 ************************************ 00:07:52.470 START TEST nvmf_filesystem_no_in_capsule 00:07:52.470 ************************************ 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1343211 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1343211 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1343211 ']' 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.470 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.470 [2024-07-14 13:43:30.430772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:52.470 [2024-07-14 13:43:30.430852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.728 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.728 [2024-07-14 13:43:30.499929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.728 [2024-07-14 13:43:30.594719] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.728 [2024-07-14 13:43:30.594774] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.728 [2024-07-14 13:43:30.594787] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.728 [2024-07-14 13:43:30.594798] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.728 [2024-07-14 13:43:30.594807] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.728 [2024-07-14 13:43:30.594902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.728 [2024-07-14 13:43:30.594961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.728 [2024-07-14 13:43:30.595026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.728 [2024-07-14 13:43:30.595029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 [2024-07-14 13:43:30.746504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.987 13:43:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 [2024-07-14 13:43:30.930280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:52.987 { 00:07:52.987 "name": "Malloc1", 00:07:52.987 "aliases": [ 00:07:52.987 "3cc6f028-9867-4a0e-bd16-68577483df3d" 00:07:52.987 ], 00:07:52.987 "product_name": "Malloc disk", 
00:07:52.987 "block_size": 512, 00:07:52.987 "num_blocks": 1048576, 00:07:52.987 "uuid": "3cc6f028-9867-4a0e-bd16-68577483df3d", 00:07:52.987 "assigned_rate_limits": { 00:07:52.987 "rw_ios_per_sec": 0, 00:07:52.987 "rw_mbytes_per_sec": 0, 00:07:52.987 "r_mbytes_per_sec": 0, 00:07:52.987 "w_mbytes_per_sec": 0 00:07:52.987 }, 00:07:52.987 "claimed": true, 00:07:52.987 "claim_type": "exclusive_write", 00:07:52.987 "zoned": false, 00:07:52.987 "supported_io_types": { 00:07:52.987 "read": true, 00:07:52.987 "write": true, 00:07:52.987 "unmap": true, 00:07:52.987 "write_zeroes": true, 00:07:52.987 "flush": true, 00:07:52.987 "reset": true, 00:07:52.987 "compare": false, 00:07:52.987 "compare_and_write": false, 00:07:52.987 "abort": true, 00:07:52.987 "nvme_admin": false, 00:07:52.987 "nvme_io": false 00:07:52.987 }, 00:07:52.987 "memory_domains": [ 00:07:52.987 { 00:07:52.987 "dma_device_id": "system", 00:07:52.987 "dma_device_type": 1 00:07:52.987 }, 00:07:52.987 { 00:07:52.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.987 "dma_device_type": 2 00:07:52.987 } 00:07:52.987 ], 00:07:52.987 "driver_specific": {} 00:07:52.987 } 00:07:52.987 ]' 00:07:52.987 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:53.245 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:53.245 13:43:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:53.245 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:53.245 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:53.245 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:53.245 13:43:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.245 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.815 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.815 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:53.815 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.815 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:53.815 13:43:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:55.722 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:55.979 13:43:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:55.979 13:43:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:56.545 13:43:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.483 ************************************ 00:07:57.483 START TEST filesystem_ext4 00:07:57.483 ************************************ 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@928 -- # force=-F 00:07:57.483 13:43:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:57.483 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.743 Discarding device blocks: 0/522240 done 00:07:57.743 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:57.743 Filesystem UUID: a5dc70c1-7e48-4b3a-bb11-c6b6f4ffe171 00:07:57.743 Superblock backups stored on blocks: 00:07:57.743 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:57.743 00:07:57.743 Allocating group tables: 0/64 done 00:07:57.743 Writing inode tables: 0/64 done 00:07:58.003 Creating journal (8192 blocks): done 00:07:58.827 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:07:58.827 00:07:58.827 13:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:58.827 13:43:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.396 13:43:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1343211 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.396 00:07:59.396 real 0m1.929s 00:07:59.396 user 0m0.026s 00:07:59.396 sys 0m0.050s 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:59.396 ************************************ 00:07:59.396 END TEST filesystem_ext4 00:07:59.396 ************************************ 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.396 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.655 ************************************ 00:07:59.655 START TEST filesystem_btrfs 00:07:59.655 ************************************ 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:59.655 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:59.914 btrfs-progs v6.6.2 00:07:59.914 See https://btrfs.readthedocs.io for more information. 00:07:59.914 00:07:59.914 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:59.914 NOTE: several default settings have changed in version 5.15, please make sure 00:07:59.914 this does not affect your deployments: 00:07:59.914 - DUP for metadata (-m dup) 00:07:59.914 - enabled no-holes (-O no-holes) 00:07:59.914 - enabled free-space-tree (-R free-space-tree) 00:07:59.914 00:07:59.914 Label: (null) 00:07:59.914 UUID: 9eb4439d-020e-4e67-b229-0663fd840145 00:07:59.914 Node size: 16384 00:07:59.914 Sector size: 4096 00:07:59.914 Filesystem size: 510.00MiB 00:07:59.914 Block group profiles: 00:07:59.914 Data: single 8.00MiB 00:07:59.914 Metadata: DUP 32.00MiB 00:07:59.914 System: DUP 8.00MiB 00:07:59.914 SSD detected: yes 00:07:59.914 Zoned device: no 00:07:59.914 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:59.914 Runtime features: free-space-tree 00:07:59.914 Checksum: crc32c 00:07:59.914 Number of devices: 1 00:07:59.914 Devices: 00:07:59.914 ID SIZE PATH 00:07:59.914 1 510.00MiB /dev/nvme0n1p1 00:07:59.914 00:07:59.914 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:59.914 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:00.172 13:43:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1343211 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.172 00:08:00.172 real 0m0.593s 00:08:00.172 user 0m0.010s 00:08:00.172 sys 0m0.115s 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.172 13:43:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:00.172 ************************************ 00:08:00.172 END TEST filesystem_btrfs 00:08:00.172 ************************************ 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.172 ************************************ 00:08:00.172 START TEST 
filesystem_xfs 00:08:00.172 ************************************ 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:00.172 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:00.172 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:00.172 = sectsz=512 attr=2, projid32bit=1 00:08:00.172 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:00.172 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:00.172 data = bsize=4096 blocks=130560, imaxpct=25 
00:08:00.172 = sunit=0 swidth=0 blks 00:08:00.172 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:00.172 log =internal log bsize=4096 blocks=16384, version=2 00:08:00.172 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:00.172 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.107 Discarding blocks...Done. 00:08:01.108 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:01.108 13:43:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1343211 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.683 00:08:03.683 real 0m3.309s 00:08:03.683 user 0m0.020s 00:08:03.683 sys 0m0.055s 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.683 ************************************ 00:08:03.683 END TEST filesystem_xfs 00:08:03.683 ************************************ 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:03.683 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1343211 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1343211 ']' 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1343211 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1343211 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1343211' 00:08:03.942 killing process with pid 1343211 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1343211 00:08:03.942 13:43:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1343211 00:08:04.200 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:04.200 00:08:04.200 real 0m11.777s 00:08:04.201 user 0m45.212s 00:08:04.201 sys 0m1.746s 00:08:04.201 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.201 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.201 ************************************ 00:08:04.201 END TEST nvmf_filesystem_no_in_capsule 00:08:04.201 ************************************ 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.460 ************************************ 00:08:04.460 START TEST nvmf_filesystem_in_capsule 00:08:04.460 ************************************ 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:04.460 13:43:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1344778 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1344778 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1344778 ']' 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:04.460 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.460 [2024-07-14 13:43:42.262040] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:04.460 [2024-07-14 13:43:42.262114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.460 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.460 [2024-07-14 13:43:42.328260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.460 [2024-07-14 13:43:42.417585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.460 [2024-07-14 13:43:42.417643] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.460 [2024-07-14 13:43:42.417656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.460 [2024-07-14 13:43:42.417667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.460 [2024-07-14 13:43:42.417676] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.460 [2024-07-14 13:43:42.417757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.460 [2024-07-14 13:43:42.417824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.460 [2024-07-14 13:43:42.417896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.460 [2024-07-14 13:43:42.417900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.719 [2024-07-14 13:43:42.577699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.719 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.979 Malloc1 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.979 13:43:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.979 [2024-07-14 13:43:42.751678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:04.979 { 00:08:04.979 "name": "Malloc1", 00:08:04.979 "aliases": [ 00:08:04.979 "70c59787-6f60-451c-b1f0-db4f4b4c613b" 00:08:04.979 ], 00:08:04.979 "product_name": "Malloc disk", 00:08:04.979 "block_size": 512, 00:08:04.979 "num_blocks": 1048576, 00:08:04.979 "uuid": "70c59787-6f60-451c-b1f0-db4f4b4c613b", 00:08:04.979 "assigned_rate_limits": { 
00:08:04.979 "rw_ios_per_sec": 0, 00:08:04.979 "rw_mbytes_per_sec": 0, 00:08:04.979 "r_mbytes_per_sec": 0, 00:08:04.979 "w_mbytes_per_sec": 0 00:08:04.979 }, 00:08:04.979 "claimed": true, 00:08:04.979 "claim_type": "exclusive_write", 00:08:04.979 "zoned": false, 00:08:04.979 "supported_io_types": { 00:08:04.979 "read": true, 00:08:04.979 "write": true, 00:08:04.979 "unmap": true, 00:08:04.979 "write_zeroes": true, 00:08:04.979 "flush": true, 00:08:04.979 "reset": true, 00:08:04.979 "compare": false, 00:08:04.979 "compare_and_write": false, 00:08:04.979 "abort": true, 00:08:04.979 "nvme_admin": false, 00:08:04.979 "nvme_io": false 00:08:04.979 }, 00:08:04.979 "memory_domains": [ 00:08:04.979 { 00:08:04.979 "dma_device_id": "system", 00:08:04.979 "dma_device_type": 1 00:08:04.979 }, 00:08:04.979 { 00:08:04.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.979 "dma_device_type": 2 00:08:04.979 } 00:08:04.979 ], 00:08:04.979 "driver_specific": {} 00:08:04.979 } 00:08:04.979 ]' 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.979 13:43:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:05.545 13:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.545 13:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:05.545 13:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.545 13:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:05.545 13:43:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:08.081 13:43:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:08.081 13:43:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:09.021 13:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.958 13:43:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.958 ************************************ 00:08:09.958 START TEST filesystem_in_capsule_ext4 00:08:09.958 ************************************ 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:09.958 13:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:08:09.958 mke2fs 1.46.5 (30-Dec-2021) 00:08:09.958 Discarding device blocks: 0/522240 done 00:08:09.958 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:09.958 Filesystem UUID: 627ef668-4aac-4275-9bc1-15787adeb33f 00:08:09.958 Superblock backups stored on blocks: 00:08:09.958 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:09.958 00:08:09.958 Allocating group tables: 0/64 done 00:08:09.958 Writing inode tables: 0/64 done 00:08:10.216 Creating journal (8192 blocks): done 00:08:11.037 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:11.037 00:08:11.037 13:43:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:11.037 13:43:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.601 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.601 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1344778 00:08:11.859 13:43:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.859 00:08:11.859 real 0m1.928s 00:08:11.859 user 0m0.018s 00:08:11.859 sys 0m0.052s 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:11.859 ************************************ 00:08:11.859 END TEST filesystem_in_capsule_ext4 00:08:11.859 ************************************ 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.859 ************************************ 00:08:11.859 START TEST filesystem_in_capsule_btrfs 00:08:11.859 ************************************ 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:11.859 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:12.117 btrfs-progs v6.6.2 00:08:12.117 See https://btrfs.readthedocs.io for more information. 00:08:12.117 00:08:12.117 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:12.117 NOTE: several default settings have changed in version 5.15, please make sure 00:08:12.117 this does not affect your deployments: 00:08:12.117 - DUP for metadata (-m dup) 00:08:12.117 - enabled no-holes (-O no-holes) 00:08:12.117 - enabled free-space-tree (-R free-space-tree) 00:08:12.117 00:08:12.117 Label: (null) 00:08:12.117 UUID: 46187e33-25aa-48fb-ae8e-5c2faee193ec 00:08:12.117 Node size: 16384 00:08:12.117 Sector size: 4096 00:08:12.117 Filesystem size: 510.00MiB 00:08:12.117 Block group profiles: 00:08:12.117 Data: single 8.00MiB 00:08:12.117 Metadata: DUP 32.00MiB 00:08:12.117 System: DUP 8.00MiB 00:08:12.117 SSD detected: yes 00:08:12.117 Zoned device: no 00:08:12.117 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:12.117 Runtime features: free-space-tree 00:08:12.117 Checksum: crc32c 00:08:12.117 Number of devices: 1 00:08:12.117 Devices: 00:08:12.117 ID SIZE PATH 00:08:12.117 1 510.00MiB /dev/nvme0n1p1 00:08:12.117 00:08:12.117 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:12.117 13:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1344778 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.682 00:08:12.682 real 0m0.766s 00:08:12.682 user 0m0.028s 00:08:12.682 sys 0m0.112s 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.682 ************************************ 00:08:12.682 END TEST filesystem_in_capsule_btrfs 00:08:12.682 ************************************ 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:08:12.682 ************************************ 00:08:12.682 START TEST filesystem_in_capsule_xfs 00:08:12.682 ************************************ 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:12.682 13:43:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:12.682 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:08:12.682 = sectsz=512 attr=2, projid32bit=1 00:08:12.682 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:12.682 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:12.682 data = bsize=4096 blocks=130560, imaxpct=25 00:08:12.682 = sunit=0 swidth=0 blks 00:08:12.682 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:12.682 log =internal log bsize=4096 blocks=16384, version=2 00:08:12.682 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:12.682 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:13.615 Discarding blocks...Done. 00:08:13.615 13:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:13.615 13:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1344778 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l 
-o NAME 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.139 00:08:16.139 real 0m3.394s 00:08:16.139 user 0m0.018s 00:08:16.139 sys 0m0.052s 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:16.139 ************************************ 00:08:16.139 END TEST filesystem_in_capsule_xfs 00:08:16.139 ************************************ 00:08:16.139 13:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1344778 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1344778 ']' 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1344778 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1344778 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1344778' 00:08:16.397 killing process with pid 1344778 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1344778 00:08:16.397 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1344778 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.964 00:08:16.964 real 0m12.514s 00:08:16.964 user 0m48.185s 00:08:16.964 sys 0m1.747s 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.964 ************************************ 00:08:16.964 END TEST nvmf_filesystem_in_capsule 00:08:16.964 ************************************ 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.964 rmmod nvme_tcp 00:08:16.964 rmmod nvme_fabrics 00:08:16.964 rmmod nvme_keyring 00:08:16.964 13:43:54 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.964 13:43:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.867 13:43:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.126 00:08:19.126 real 0m28.745s 00:08:19.126 user 1m34.270s 00:08:19.126 sys 0m5.075s 00:08:19.126 13:43:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.126 13:43:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.126 ************************************ 00:08:19.126 END TEST nvmf_filesystem 00:08:19.126 ************************************ 00:08:19.126 13:43:56 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:19.126 13:43:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:19.126 13:43:56 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.126 13:43:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.126 ************************************ 00:08:19.126 START TEST nvmf_target_discovery 00:08:19.126 ************************************ 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:19.126 * Looking for test storage... 00:08:19.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:19.126 13:43:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.127 13:43:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.027 13:43:58 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.027 13:43:58 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.027 13:43:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.027 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:08:21.285 00:08:21.285 --- 10.0.0.2 ping statistics --- 00:08:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.285 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:21.285 00:08:21.285 --- 10.0.0.1 ping statistics --- 00:08:21.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.285 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1348379 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # 
waitforlisten 1348379 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1348379 ']' 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.285 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:21.286 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.286 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:21.286 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.286 [2024-07-14 13:43:59.135368] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:21.286 [2024-07-14 13:43:59.135461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.286 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.286 [2024-07-14 13:43:59.215036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.543 [2024-07-14 13:43:59.310402] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.543 [2024-07-14 13:43:59.310460] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.543 [2024-07-14 13:43:59.310476] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.543 [2024-07-14 13:43:59.310498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:21.543 [2024-07-14 13:43:59.310510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.543 [2024-07-14 13:43:59.310591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.543 [2024-07-14 13:43:59.310658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.543 [2024-07-14 13:43:59.310680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.543 [2024-07-14 13:43:59.310682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.543 [2024-07-14 13:43:59.468652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.543 13:43:59 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:21.543 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.544 Null1 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.544 [2024-07-14 13:43:59.509027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.544 13:43:59 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.544 Null2 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.544 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 Null3 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 Null4 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.801 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:22.059 00:08:22.059 Discovery Log Number of Records 6, Generation counter 6 00:08:22.059 =====Discovery Log Entry 0====== 00:08:22.059 trtype: tcp 00:08:22.059 adrfam: ipv4 00:08:22.059 subtype: current discovery subsystem 00:08:22.059 treq: not required 00:08:22.059 portid: 0 00:08:22.059 trsvcid: 4420 00:08:22.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:22.059 traddr: 10.0.0.2 00:08:22.059 eflags: explicit discovery connections, duplicate discovery information 00:08:22.059 sectype: none 00:08:22.059 =====Discovery Log Entry 1====== 00:08:22.059 trtype: tcp 00:08:22.059 adrfam: ipv4 00:08:22.059 subtype: nvme subsystem 00:08:22.060 treq: not required 00:08:22.060 portid: 0 00:08:22.060 trsvcid: 4420 00:08:22.060 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:22.060 traddr: 10.0.0.2 00:08:22.060 eflags: none 00:08:22.060 sectype: none 00:08:22.060 =====Discovery Log Entry 2====== 00:08:22.060 trtype: tcp 00:08:22.060 adrfam: 
ipv4 00:08:22.060 subtype: nvme subsystem 00:08:22.060 treq: not required 00:08:22.060 portid: 0 00:08:22.060 trsvcid: 4420 00:08:22.060 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:22.060 traddr: 10.0.0.2 00:08:22.060 eflags: none 00:08:22.060 sectype: none 00:08:22.060 =====Discovery Log Entry 3====== 00:08:22.060 trtype: tcp 00:08:22.060 adrfam: ipv4 00:08:22.060 subtype: nvme subsystem 00:08:22.060 treq: not required 00:08:22.060 portid: 0 00:08:22.060 trsvcid: 4420 00:08:22.060 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:22.060 traddr: 10.0.0.2 00:08:22.060 eflags: none 00:08:22.060 sectype: none 00:08:22.060 =====Discovery Log Entry 4====== 00:08:22.060 trtype: tcp 00:08:22.060 adrfam: ipv4 00:08:22.060 subtype: nvme subsystem 00:08:22.060 treq: not required 00:08:22.060 portid: 0 00:08:22.060 trsvcid: 4420 00:08:22.060 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:22.060 traddr: 10.0.0.2 00:08:22.060 eflags: none 00:08:22.060 sectype: none 00:08:22.060 =====Discovery Log Entry 5====== 00:08:22.060 trtype: tcp 00:08:22.060 adrfam: ipv4 00:08:22.060 subtype: discovery subsystem referral 00:08:22.060 treq: not required 00:08:22.060 portid: 0 00:08:22.060 trsvcid: 4430 00:08:22.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:22.060 traddr: 10.0.0.2 00:08:22.060 eflags: none 00:08:22.060 sectype: none 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:22.060 Perform nvmf subsystem discovery via RPC 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 [ 00:08:22.060 { 00:08:22.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:22.060 "subtype": "Discovery", 00:08:22.060 "listen_addresses": [ 00:08:22.060 { 
00:08:22.060 "trtype": "TCP", 00:08:22.060 "adrfam": "IPv4", 00:08:22.060 "traddr": "10.0.0.2", 00:08:22.060 "trsvcid": "4420" 00:08:22.060 } 00:08:22.060 ], 00:08:22.060 "allow_any_host": true, 00:08:22.060 "hosts": [] 00:08:22.060 }, 00:08:22.060 { 00:08:22.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.060 "subtype": "NVMe", 00:08:22.060 "listen_addresses": [ 00:08:22.060 { 00:08:22.060 "trtype": "TCP", 00:08:22.060 "adrfam": "IPv4", 00:08:22.060 "traddr": "10.0.0.2", 00:08:22.060 "trsvcid": "4420" 00:08:22.060 } 00:08:22.060 ], 00:08:22.060 "allow_any_host": true, 00:08:22.060 "hosts": [], 00:08:22.060 "serial_number": "SPDK00000000000001", 00:08:22.060 "model_number": "SPDK bdev Controller", 00:08:22.060 "max_namespaces": 32, 00:08:22.060 "min_cntlid": 1, 00:08:22.060 "max_cntlid": 65519, 00:08:22.060 "namespaces": [ 00:08:22.060 { 00:08:22.060 "nsid": 1, 00:08:22.060 "bdev_name": "Null1", 00:08:22.060 "name": "Null1", 00:08:22.060 "nguid": "2C92D4609E014F458D599D72E0E338A5", 00:08:22.060 "uuid": "2c92d460-9e01-4f45-8d59-9d72e0e338a5" 00:08:22.060 } 00:08:22.060 ] 00:08:22.060 }, 00:08:22.060 { 00:08:22.060 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:22.060 "subtype": "NVMe", 00:08:22.060 "listen_addresses": [ 00:08:22.060 { 00:08:22.060 "trtype": "TCP", 00:08:22.060 "adrfam": "IPv4", 00:08:22.060 "traddr": "10.0.0.2", 00:08:22.060 "trsvcid": "4420" 00:08:22.060 } 00:08:22.060 ], 00:08:22.060 "allow_any_host": true, 00:08:22.060 "hosts": [], 00:08:22.060 "serial_number": "SPDK00000000000002", 00:08:22.060 "model_number": "SPDK bdev Controller", 00:08:22.060 "max_namespaces": 32, 00:08:22.060 "min_cntlid": 1, 00:08:22.060 "max_cntlid": 65519, 00:08:22.060 "namespaces": [ 00:08:22.060 { 00:08:22.060 "nsid": 1, 00:08:22.060 "bdev_name": "Null2", 00:08:22.060 "name": "Null2", 00:08:22.060 "nguid": "8F15439E0114419894D74676FFBABA27", 00:08:22.060 "uuid": "8f15439e-0114-4198-94d7-4676ffbaba27" 00:08:22.060 } 00:08:22.060 ] 00:08:22.060 }, 00:08:22.060 { 
00:08:22.060 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:22.060 "subtype": "NVMe", 00:08:22.060 "listen_addresses": [ 00:08:22.060 { 00:08:22.060 "trtype": "TCP", 00:08:22.060 "adrfam": "IPv4", 00:08:22.060 "traddr": "10.0.0.2", 00:08:22.060 "trsvcid": "4420" 00:08:22.060 } 00:08:22.060 ], 00:08:22.060 "allow_any_host": true, 00:08:22.060 "hosts": [], 00:08:22.060 "serial_number": "SPDK00000000000003", 00:08:22.060 "model_number": "SPDK bdev Controller", 00:08:22.060 "max_namespaces": 32, 00:08:22.060 "min_cntlid": 1, 00:08:22.060 "max_cntlid": 65519, 00:08:22.060 "namespaces": [ 00:08:22.060 { 00:08:22.060 "nsid": 1, 00:08:22.060 "bdev_name": "Null3", 00:08:22.060 "name": "Null3", 00:08:22.060 "nguid": "EAC80D6D70B04EF8B23F9DFBAC3E952C", 00:08:22.060 "uuid": "eac80d6d-70b0-4ef8-b23f-9dfbac3e952c" 00:08:22.060 } 00:08:22.060 ] 00:08:22.060 }, 00:08:22.060 { 00:08:22.060 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:22.060 "subtype": "NVMe", 00:08:22.060 "listen_addresses": [ 00:08:22.060 { 00:08:22.060 "trtype": "TCP", 00:08:22.060 "adrfam": "IPv4", 00:08:22.060 "traddr": "10.0.0.2", 00:08:22.060 "trsvcid": "4420" 00:08:22.060 } 00:08:22.060 ], 00:08:22.060 "allow_any_host": true, 00:08:22.060 "hosts": [], 00:08:22.060 "serial_number": "SPDK00000000000004", 00:08:22.060 "model_number": "SPDK bdev Controller", 00:08:22.060 "max_namespaces": 32, 00:08:22.060 "min_cntlid": 1, 00:08:22.060 "max_cntlid": 65519, 00:08:22.060 "namespaces": [ 00:08:22.060 { 00:08:22.060 "nsid": 1, 00:08:22.060 "bdev_name": "Null4", 00:08:22.060 "name": "Null4", 00:08:22.060 "nguid": "A1031455D1844A37869AB1A00845693C", 00:08:22.060 "uuid": "a1031455-d184-4a37-869a-b1a00845693c" 00:08:22.060 } 00:08:22.060 ] 00:08:22.060 } 00:08:22.060 ] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.061 rmmod nvme_tcp 00:08:22.061 rmmod nvme_fabrics 00:08:22.061 rmmod nvme_keyring 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1348379 ']' 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1348379 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1348379 ']' 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1348379 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:22.061 13:43:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1348379 00:08:22.061 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:22.061 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:22.061 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1348379' 00:08:22.061 killing process with pid 1348379 00:08:22.061 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1348379 00:08:22.061 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1348379 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.319 13:44:00 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.319 13:44:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.937 13:44:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.937 00:08:24.937 real 0m5.383s 00:08:24.937 user 0m4.511s 00:08:24.937 sys 0m1.765s 00:08:24.937 13:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.937 13:44:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.937 ************************************ 00:08:24.937 END TEST nvmf_target_discovery 00:08:24.937 ************************************ 00:08:24.937 13:44:02 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:24.937 13:44:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:24.937 13:44:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.937 13:44:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.938 ************************************ 00:08:24.938 START TEST nvmf_referrals 00:08:24.938 ************************************ 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:24.938 * Looking for test storage... 
00:08:24.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.938 
13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.938 13:44:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:26.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:26.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:26.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.839 13:44:04 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:26.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:08:26.839 00:08:26.839 --- 10.0.0.2 ping statistics --- 00:08:26.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.839 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:26.839 00:08:26.839 --- 10.0.0.1 ping statistics --- 00:08:26.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.839 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.839 13:44:04 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1350490 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1350490 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1350490 ']' 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:26.839 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.839 [2024-07-14 13:44:04.662626] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:26.840 [2024-07-14 13:44:04.662706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.840 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.840 [2024-07-14 13:44:04.728694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.840 [2024-07-14 13:44:04.818523] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.840 [2024-07-14 13:44:04.818576] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:26.840 [2024-07-14 13:44:04.818589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.840 [2024-07-14 13:44:04.818615] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.840 [2024-07-14 13:44:04.818625] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.840 [2024-07-14 13:44:04.818702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.840 [2024-07-14 13:44:04.818769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.840 [2024-07-14 13:44:04.818819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.840 [2024-07-14 13:44:04.818821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.097 [2024-07-14 13:44:04.982621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.097 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.098 13:44:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:27.098 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.098 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 [2024-07-14 13:44:04.994870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:27.098 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.098 13:44:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:27.098 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.098 13:44:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:27.098 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.356 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.614 13:44:05 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:27.614 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.872 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.130 13:44:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.130 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.388 13:44:06 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.388 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.646 rmmod nvme_tcp 00:08:28.646 rmmod nvme_fabrics 00:08:28.646 rmmod nvme_keyring 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1350490 ']' 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1350490 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1350490 ']' 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1350490 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1350490 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1350490' 00:08:28.646 killing process with pid 1350490 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1350490 00:08:28.646 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1350490 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.906 13:44:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.808 13:44:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.808 00:08:30.808 real 0m6.416s 00:08:30.808 user 0m8.821s 00:08:30.808 sys 0m2.149s 00:08:30.808 13:44:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.808 13:44:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.809 ************************************ 
00:08:30.809 END TEST nvmf_referrals 00:08:30.809 ************************************ 00:08:30.809 13:44:08 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.809 13:44:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:30.809 13:44:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.809 13:44:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.067 ************************************ 00:08:31.067 START TEST nvmf_connect_disconnect 00:08:31.067 ************************************ 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:31.067 * Looking for test storage... 00:08:31.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.067 13:44:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.973 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.973 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.974 13:44:10 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.974 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.232 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.232 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.232 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:33.232 13:44:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:33.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:08:33.232 00:08:33.232 --- 10.0.0.2 ping statistics --- 00:08:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.232 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:08:33.232 00:08:33.232 --- 10.0.0.1 ping statistics --- 00:08:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.232 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1352663 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1352663 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1352663 ']' 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.232 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.233 [2024-07-14 13:44:11.094935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:33.233 [2024-07-14 13:44:11.095002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.233 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.233 [2024-07-14 13:44:11.161968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.490 [2024-07-14 13:44:11.256546] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.490 [2024-07-14 13:44:11.256601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.490 [2024-07-14 13:44:11.256617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.490 [2024-07-14 13:44:11.256630] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.491 [2024-07-14 13:44:11.256642] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:33.491 [2024-07-14 13:44:11.257038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.491 [2024-07-14 13:44:11.257069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.491 [2024-07-14 13:44:11.257118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.491 [2024-07-14 13:44:11.257121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.491 [2024-07-14 13:44:11.419703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.491 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.748 [2024-07-14 13:44:11.480921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:33.748 13:44:11 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:33.748 13:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:36.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:28.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.182 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.399 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:24.206 rmmod nvme_tcp 00:12:24.206 rmmod nvme_fabrics 00:12:24.206 rmmod nvme_keyring 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1352663 ']' 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1352663 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1352663 ']' 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1352663 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1352663 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1352663' 
00:12:24.206 killing process with pid 1352663 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1352663 00:12:24.206 13:48:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1352663 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.489 13:48:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.393 13:48:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.393 00:12:26.393 real 3m55.479s 00:12:26.393 user 14m57.861s 00:12:26.393 sys 0m33.584s 00:12:26.393 13:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.393 13:48:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.393 ************************************ 00:12:26.393 END TEST nvmf_connect_disconnect 00:12:26.393 ************************************ 00:12:26.393 13:48:04 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.393 13:48:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:26.393 13:48:04 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.393 13:48:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.393 ************************************ 00:12:26.393 START TEST nvmf_multitarget 00:12:26.393 ************************************ 00:12:26.393 13:48:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.651 * Looking for test storage... 00:12:26.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.651 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.652 13:48:04 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.652 13:48:04 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.652 13:48:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.551 13:48:06 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.551 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:12:28.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:12:28.551 00:12:28.551 --- 10.0.0.2 ping statistics --- 00:12:28.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.551 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:28.551 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:12:28.552 00:12:28.552 --- 10.0.0.1 ping statistics --- 00:12:28.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.552 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.552 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1383741 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1383741 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1383741 ']' 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.810 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.810 [2024-07-14 13:48:06.599799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:28.810 [2024-07-14 13:48:06.599897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.810 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.810 [2024-07-14 13:48:06.675247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.810 [2024-07-14 13:48:06.761653] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:28.810 [2024-07-14 13:48:06.761724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.810 [2024-07-14 13:48:06.761737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.810 [2024-07-14 13:48:06.761748] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.810 [2024-07-14 13:48:06.761773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.810 [2024-07-14 13:48:06.761821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.810 [2024-07-14 13:48:06.761893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.810 [2024-07-14 13:48:06.761926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.810 [2024-07-14 13:48:06.761929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:29.069 13:48:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.069 13:48:06 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:29.069 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:29.069 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:29.327 "nvmf_tgt_1" 00:12:29.327 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:29.327 "nvmf_tgt_2" 00:12:29.327 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.327 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:29.585 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:29.585 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:29.585 true 00:12:29.585 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:29.843 true 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.843 rmmod nvme_tcp 00:12:29.843 rmmod nvme_fabrics 00:12:29.843 rmmod nvme_keyring 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1383741 ']' 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1383741 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1383741 ']' 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1383741 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1383741 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1383741' 00:12:29.843 killing process 
with pid 1383741 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1383741 00:12:29.843 13:48:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1383741 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.102 13:48:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.636 13:48:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.636 00:12:32.636 real 0m5.743s 00:12:32.636 user 0m6.502s 00:12:32.636 sys 0m1.901s 00:12:32.636 13:48:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:32.636 13:48:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 ************************************ 00:12:32.636 END TEST nvmf_multitarget 00:12:32.636 ************************************ 00:12:32.636 13:48:10 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.637 13:48:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:32.637 13:48:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:32.637 13:48:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.637 
************************************ 00:12:32.637 START TEST nvmf_rpc 00:12:32.637 ************************************ 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.637 * Looking for test storage... 00:12:32.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.637 13:48:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.543 13:48:12 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:34.543 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.543 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.543 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.543 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.543 13:48:12 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:12:34.544 00:12:34.544 --- 10.0.0.2 ping statistics --- 00:12:34.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.544 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:12:34.544 00:12:34.544 --- 10.0.0.1 ping statistics --- 00:12:34.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.544 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:34.544 
13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1386339 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1386339 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1386339 ']' 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.544 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.544 [2024-07-14 13:48:12.272474] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:34.544 [2024-07-14 13:48:12.272550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.544 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.544 [2024-07-14 13:48:12.342129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.544 [2024-07-14 13:48:12.435554] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.544 [2024-07-14 13:48:12.435617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.544 [2024-07-14 13:48:12.435633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.544 [2024-07-14 13:48:12.435646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.544 [2024-07-14 13:48:12.435658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:34.544 [2024-07-14 13:48:12.435755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.544 [2024-07-14 13:48:12.435824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.544 [2024-07-14 13:48:12.435924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.544 [2024-07-14 13:48:12.435927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:34.803 "tick_rate": 2700000000, 00:12:34.803 "poll_groups": [ 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_000", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [] 00:12:34.803 }, 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_001", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 
0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [] 00:12:34.803 }, 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_002", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [] 00:12:34.803 }, 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_003", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [] 00:12:34.803 } 00:12:34.803 ] 00:12:34.803 }' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.803 [2024-07-14 13:48:12.687104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:34.803 "tick_rate": 2700000000, 00:12:34.803 "poll_groups": [ 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_000", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [ 00:12:34.803 { 00:12:34.803 "trtype": "TCP" 00:12:34.803 } 00:12:34.803 ] 00:12:34.803 }, 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_001", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [ 00:12:34.803 { 00:12:34.803 "trtype": "TCP" 00:12:34.803 } 00:12:34.803 ] 00:12:34.803 }, 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_002", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [ 00:12:34.803 { 00:12:34.803 "trtype": "TCP" 00:12:34.803 } 00:12:34.803 ] 00:12:34.803 }, 00:12:34.803 { 00:12:34.803 "name": "nvmf_tgt_poll_group_003", 00:12:34.803 "admin_qpairs": 0, 00:12:34.803 "io_qpairs": 0, 00:12:34.803 "current_admin_qpairs": 0, 00:12:34.803 "current_io_qpairs": 0, 00:12:34.803 "pending_bdev_io": 0, 00:12:34.803 "completed_nvme_io": 0, 00:12:34.803 "transports": [ 00:12:34.803 { 00:12:34.803 "trtype": "TCP" 00:12:34.803 } 00:12:34.803 ] 00:12:34.803 } 
00:12:34.803 ] 00:12:34.803 }' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.803 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.804 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:34.804 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.804 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.804 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.804 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.065 Malloc1 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.065 [2024-07-14 13:48:12.842811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:35.065 [2024-07-14 13:48:12.865235] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:35.065 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.065 could not add new controller: failed to write to nvme-fabrics device 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.065 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.635 13:48:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.635 13:48:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:35.635 13:48:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.635 13:48:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:35.635 13:48:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.160 13:48:15 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.160 [2024-07-14 13:48:15.686739] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:38.160 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.160 could not add new controller: failed to write to nvme-fabrics device 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.160 13:48:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.416 13:48:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.416 13:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.416 13:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.416 13:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.416 13:48:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.936 [2024-07-14 13:48:18.493983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.936 13:48:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.500 13:48:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.500 13:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:41.500 13:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.500 13:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.500 13:48:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.395 [2024-07-14 13:48:21.307946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.395 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.396 13:48:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.328 13:48:21 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.328 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:44.328 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.328 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:44.328 13:48:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:46.264 13:48:24 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 [2024-07-14 13:48:24.120693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.264 13:48:24 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.265 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.196 13:48:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.196 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:47.196 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.196 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:47.196 13:48:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 
0 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.094 [2024-07-14 13:48:26.991227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.094 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.095 13:48:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.095 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.095 13:48:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.095 13:48:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.095 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.095 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.095 13:48:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.033 13:48:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.033 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 
00:12:50.033 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.033 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:50.033 13:48:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.931 [2024-07-14 13:48:29.796813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.931 13:48:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.497 13:48:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.497 13:48:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:52.497 13:48:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.497 13:48:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:52.497 13:48:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.028 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 [2024-07-14 13:48:32.522694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 [2024-07-14 13:48:32.570763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.028 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 [2024-07-14 13:48:32.618946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 [2024-07-14 13:48:32.667087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 [2024-07-14 13:48:32.715269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:55.029 "tick_rate": 2700000000, 00:12:55.029 "poll_groups": [ 00:12:55.029 { 00:12:55.029 "name": "nvmf_tgt_poll_group_000", 00:12:55.029 "admin_qpairs": 2, 00:12:55.029 "io_qpairs": 84, 00:12:55.029 "current_admin_qpairs": 0, 00:12:55.029 "current_io_qpairs": 0, 00:12:55.029 "pending_bdev_io": 0, 00:12:55.029 "completed_nvme_io": 133, 00:12:55.029 "transports": [ 00:12:55.029 { 00:12:55.029 "trtype": "TCP" 00:12:55.029 } 00:12:55.029 ] 00:12:55.029 }, 00:12:55.029 { 00:12:55.029 "name": "nvmf_tgt_poll_group_001", 00:12:55.029 "admin_qpairs": 2, 00:12:55.029 "io_qpairs": 84, 
00:12:55.029 "current_admin_qpairs": 0, 00:12:55.029 "current_io_qpairs": 0, 00:12:55.029 "pending_bdev_io": 0, 00:12:55.029 "completed_nvme_io": 184, 00:12:55.029 "transports": [ 00:12:55.029 { 00:12:55.029 "trtype": "TCP" 00:12:55.029 } 00:12:55.029 ] 00:12:55.029 }, 00:12:55.029 { 00:12:55.029 "name": "nvmf_tgt_poll_group_002", 00:12:55.029 "admin_qpairs": 1, 00:12:55.029 "io_qpairs": 84, 00:12:55.029 "current_admin_qpairs": 0, 00:12:55.029 "current_io_qpairs": 0, 00:12:55.029 "pending_bdev_io": 0, 00:12:55.029 "completed_nvme_io": 183, 00:12:55.029 "transports": [ 00:12:55.029 { 00:12:55.029 "trtype": "TCP" 00:12:55.029 } 00:12:55.029 ] 00:12:55.029 }, 00:12:55.029 { 00:12:55.029 "name": "nvmf_tgt_poll_group_003", 00:12:55.029 "admin_qpairs": 2, 00:12:55.029 "io_qpairs": 84, 00:12:55.029 "current_admin_qpairs": 0, 00:12:55.029 "current_io_qpairs": 0, 00:12:55.029 "pending_bdev_io": 0, 00:12:55.029 "completed_nvme_io": 186, 00:12:55.029 "transports": [ 00:12:55.029 { 00:12:55.029 "trtype": "TCP" 00:12:55.029 } 00:12:55.029 ] 00:12:55.029 } 00:12:55.029 ] 00:12:55.029 }' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.029 rmmod nvme_tcp 00:12:55.029 rmmod nvme_fabrics 00:12:55.029 rmmod nvme_keyring 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1386339 ']' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1386339 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1386339 ']' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1386339 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1386339 00:12:55.029 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.030 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.030 
13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1386339' 00:12:55.030 killing process with pid 1386339 00:12:55.030 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1386339 00:12:55.030 13:48:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1386339 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.288 13:48:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.817 13:48:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.817 00:12:57.817 real 0m25.107s 00:12:57.817 user 1m22.261s 00:12:57.817 sys 0m3.897s 00:12:57.817 13:48:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.817 13:48:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.817 ************************************ 00:12:57.817 END TEST nvmf_rpc 00:12:57.817 ************************************ 00:12:57.817 13:48:35 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:57.817 13:48:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:57.817 13:48:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.817 13:48:35 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:12:57.817 ************************************ 00:12:57.817 START TEST nvmf_invalid 00:12:57.817 ************************************ 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:57.817 * Looking for test storage... 00:12:57.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.817 13:48:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.818 13:48:35 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.728 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:59.729 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.729 13:48:37 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:59.729 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:59.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:59.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.729 13:48:37 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:12:59.729 00:12:59.729 --- 10.0.0.2 ping statistics --- 00:12:59.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.729 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:12:59.729 00:12:59.729 --- 10.0.0.1 ping statistics --- 00:12:59.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.729 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=1390934 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1390934 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1390934 ']' 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.729 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.729 [2024-07-14 13:48:37.483383] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:59.729 [2024-07-14 13:48:37.483489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.729 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.729 [2024-07-14 13:48:37.564671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.729 [2024-07-14 13:48:37.663998] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.729 [2024-07-14 13:48:37.664063] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:59.729 [2024-07-14 13:48:37.664079] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.729 [2024-07-14 13:48:37.664092] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.729 [2024-07-14 13:48:37.664103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.729 [2024-07-14 13:48:37.664186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.729 [2024-07-14 13:48:37.664242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.729 [2024-07-14 13:48:37.664294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.729 [2024-07-14 13:48:37.664296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:59.988 13:48:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3968 00:13:00.246 [2024-07-14 13:48:38.088483] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:00.246 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:13:00.246 { 00:13:00.246 "nqn": "nqn.2016-06.io.spdk:cnode3968", 00:13:00.246 "tgt_name": "foobar", 00:13:00.246 "method": "nvmf_create_subsystem", 00:13:00.246 "req_id": 1 00:13:00.246 } 00:13:00.246 Got JSON-RPC error response 00:13:00.246 response: 00:13:00.246 { 00:13:00.246 "code": -32603, 00:13:00.246 "message": "Unable to find target foobar" 00:13:00.246 }' 00:13:00.246 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:00.246 { 00:13:00.246 "nqn": "nqn.2016-06.io.spdk:cnode3968", 00:13:00.246 "tgt_name": "foobar", 00:13:00.246 "method": "nvmf_create_subsystem", 00:13:00.246 "req_id": 1 00:13:00.246 } 00:13:00.246 Got JSON-RPC error response 00:13:00.246 response: 00:13:00.246 { 00:13:00.246 "code": -32603, 00:13:00.246 "message": "Unable to find target foobar" 00:13:00.246 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:00.246 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:00.246 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12363 00:13:00.504 [2024-07-14 13:48:38.333312] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12363: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:00.504 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:00.504 { 00:13:00.504 "nqn": "nqn.2016-06.io.spdk:cnode12363", 00:13:00.504 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.504 "method": "nvmf_create_subsystem", 00:13:00.504 "req_id": 1 00:13:00.504 } 00:13:00.504 Got JSON-RPC error response 00:13:00.504 response: 00:13:00.504 { 00:13:00.504 "code": -32602, 00:13:00.504 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.504 }' 00:13:00.504 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:00.504 { 00:13:00.504 "nqn": 
"nqn.2016-06.io.spdk:cnode12363", 00:13:00.504 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.504 "method": "nvmf_create_subsystem", 00:13:00.504 "req_id": 1 00:13:00.504 } 00:13:00.504 Got JSON-RPC error response 00:13:00.504 response: 00:13:00.504 { 00:13:00.504 "code": -32602, 00:13:00.504 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.504 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:00.504 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:00.504 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25280 00:13:00.763 [2024-07-14 13:48:38.614257] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25280: invalid model number 'SPDK_Controller' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:00.763 { 00:13:00.763 "nqn": "nqn.2016-06.io.spdk:cnode25280", 00:13:00.763 "model_number": "SPDK_Controller\u001f", 00:13:00.763 "method": "nvmf_create_subsystem", 00:13:00.763 "req_id": 1 00:13:00.763 } 00:13:00.763 Got JSON-RPC error response 00:13:00.763 response: 00:13:00.763 { 00:13:00.763 "code": -32602, 00:13:00.763 "message": "Invalid MN SPDK_Controller\u001f" 00:13:00.763 }' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:00.763 { 00:13:00.763 "nqn": "nqn.2016-06.io.spdk:cnode25280", 00:13:00.763 "model_number": "SPDK_Controller\u001f", 00:13:00.763 "method": "nvmf_create_subsystem", 00:13:00.763 "req_id": 1 00:13:00.763 } 00:13:00.763 Got JSON-RPC error response 00:13:00.763 response: 00:13:00.763 { 00:13:00.763 "code": -32602, 00:13:00.763 "message": "Invalid MN SPDK_Controller\u001f" 00:13:00.763 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 
00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:00.763 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 
00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? == \- ]] 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '?_Rm7gws{hE6PQ^XVG'\''qR' 00:13:00.764 13:48:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '?_Rm7gws{hE6PQ^XVG'\''qR' nqn.2016-06.io.spdk:cnode21533 00:13:01.022 [2024-07-14 13:48:38.983467] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21533: invalid serial number '?_Rm7gws{hE6PQ^XVG'qR' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:01.281 { 00:13:01.281 "nqn": "nqn.2016-06.io.spdk:cnode21533", 00:13:01.281 "serial_number": "?_Rm7gws{hE6PQ^XVG'\''qR", 00:13:01.281 "method": "nvmf_create_subsystem", 00:13:01.281 "req_id": 1 00:13:01.281 } 00:13:01.281 Got JSON-RPC error response 00:13:01.281 response: 00:13:01.281 { 00:13:01.281 "code": -32602, 00:13:01.281 "message": "Invalid SN ?_Rm7gws{hE6PQ^XVG'\''qR" 00:13:01.281 }' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@55 -- # [[ request: 00:13:01.281 { 00:13:01.281 "nqn": "nqn.2016-06.io.spdk:cnode21533", 00:13:01.281 "serial_number": "?_Rm7gws{hE6PQ^XVG'qR", 00:13:01.281 "method": "nvmf_create_subsystem", 00:13:01.281 "req_id": 1 00:13:01.281 } 00:13:01.281 Got JSON-RPC error response 00:13:01.281 response: 00:13:01.281 { 00:13:01.281 "code": -32602, 00:13:01.281 "message": "Invalid SN ?_Rm7gws{hE6PQ^XVG'qR" 00:13:01.281 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=C 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
93 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.281 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 
00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 
00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 
00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 
13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dd gmC`AcK]ioLa8I]"|gwn2H`dE1f6DAVy-A(295' 00:13:01.282 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'dd gmC`AcK]ioLa8I]"|gwn2H`dE1f6DAVy-A(295' nqn.2016-06.io.spdk:cnode7198 00:13:01.540 [2024-07-14 13:48:39.384724] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7198: invalid model number 'dd 
gmC`AcK]ioLa8I]"|gwn2H`dE1f6DAVy-A(295' 00:13:01.540 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:01.540 { 00:13:01.540 "nqn": "nqn.2016-06.io.spdk:cnode7198", 00:13:01.540 "model_number": "dd gmC`AcK]ioLa8I]\"|gwn2H`dE1f6DAVy-A(295", 00:13:01.540 "method": "nvmf_create_subsystem", 00:13:01.540 "req_id": 1 00:13:01.540 } 00:13:01.540 Got JSON-RPC error response 00:13:01.540 response: 00:13:01.540 { 00:13:01.540 "code": -32602, 00:13:01.540 "message": "Invalid MN dd gmC`AcK]ioLa8I]\"|gwn2H`dE1f6DAVy-A(295" 00:13:01.540 }' 00:13:01.540 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:01.540 { 00:13:01.540 "nqn": "nqn.2016-06.io.spdk:cnode7198", 00:13:01.540 "model_number": "dd gmC`AcK]ioLa8I]\"|gwn2H`dE1f6DAVy-A(295", 00:13:01.540 "method": "nvmf_create_subsystem", 00:13:01.540 "req_id": 1 00:13:01.540 } 00:13:01.540 Got JSON-RPC error response 00:13:01.540 response: 00:13:01.540 { 00:13:01.540 "code": -32602, 00:13:01.540 "message": "Invalid MN dd gmC`AcK]ioLa8I]\"|gwn2H`dE1f6DAVy-A(295" 00:13:01.540 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.540 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:01.798 [2024-07-14 13:48:39.633648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.798 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:02.056 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:02.056 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:02.056 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:02.056 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:02.056 13:48:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:02.314 [2024-07-14 13:48:40.151348] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:02.314 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:02.314 { 00:13:02.314 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.314 "listen_address": { 00:13:02.314 "trtype": "tcp", 00:13:02.314 "traddr": "", 00:13:02.314 "trsvcid": "4421" 00:13:02.314 }, 00:13:02.314 "method": "nvmf_subsystem_remove_listener", 00:13:02.314 "req_id": 1 00:13:02.314 } 00:13:02.314 Got JSON-RPC error response 00:13:02.314 response: 00:13:02.314 { 00:13:02.314 "code": -32602, 00:13:02.314 "message": "Invalid parameters" 00:13:02.314 }' 00:13:02.314 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:02.314 { 00:13:02.314 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:02.314 "listen_address": { 00:13:02.314 "trtype": "tcp", 00:13:02.314 "traddr": "", 00:13:02.314 "trsvcid": "4421" 00:13:02.314 }, 00:13:02.314 "method": "nvmf_subsystem_remove_listener", 00:13:02.314 "req_id": 1 00:13:02.314 } 00:13:02.314 Got JSON-RPC error response 00:13:02.314 response: 00:13:02.314 { 00:13:02.314 "code": -32602, 00:13:02.314 "message": "Invalid parameters" 00:13:02.314 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:02.314 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11359 -i 0 00:13:02.572 [2024-07-14 13:48:40.396103] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11359: invalid cntlid range [0-65519] 00:13:02.572 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:02.572 { 00:13:02.572 "nqn": "nqn.2016-06.io.spdk:cnode11359", 00:13:02.572 "min_cntlid": 0, 00:13:02.572 
"method": "nvmf_create_subsystem", 00:13:02.572 "req_id": 1 00:13:02.572 } 00:13:02.572 Got JSON-RPC error response 00:13:02.572 response: 00:13:02.572 { 00:13:02.572 "code": -32602, 00:13:02.572 "message": "Invalid cntlid range [0-65519]" 00:13:02.572 }' 00:13:02.572 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:02.572 { 00:13:02.572 "nqn": "nqn.2016-06.io.spdk:cnode11359", 00:13:02.572 "min_cntlid": 0, 00:13:02.572 "method": "nvmf_create_subsystem", 00:13:02.572 "req_id": 1 00:13:02.572 } 00:13:02.572 Got JSON-RPC error response 00:13:02.572 response: 00:13:02.572 { 00:13:02.572 "code": -32602, 00:13:02.572 "message": "Invalid cntlid range [0-65519]" 00:13:02.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.572 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19866 -i 65520 00:13:02.830 [2024-07-14 13:48:40.636910] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19866: invalid cntlid range [65520-65519] 00:13:02.830 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:02.830 { 00:13:02.830 "nqn": "nqn.2016-06.io.spdk:cnode19866", 00:13:02.830 "min_cntlid": 65520, 00:13:02.830 "method": "nvmf_create_subsystem", 00:13:02.830 "req_id": 1 00:13:02.830 } 00:13:02.830 Got JSON-RPC error response 00:13:02.830 response: 00:13:02.830 { 00:13:02.830 "code": -32602, 00:13:02.830 "message": "Invalid cntlid range [65520-65519]" 00:13:02.830 }' 00:13:02.830 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:02.830 { 00:13:02.830 "nqn": "nqn.2016-06.io.spdk:cnode19866", 00:13:02.830 "min_cntlid": 65520, 00:13:02.830 "method": "nvmf_create_subsystem", 00:13:02.830 "req_id": 1 00:13:02.830 } 00:13:02.830 Got JSON-RPC error response 00:13:02.830 response: 00:13:02.830 { 00:13:02.830 "code": -32602, 00:13:02.830 "message": 
"Invalid cntlid range [65520-65519]" 00:13:02.830 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.830 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29774 -I 0 00:13:03.088 [2024-07-14 13:48:40.897845] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29774: invalid cntlid range [1-0] 00:13:03.088 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:03.088 { 00:13:03.088 "nqn": "nqn.2016-06.io.spdk:cnode29774", 00:13:03.088 "max_cntlid": 0, 00:13:03.088 "method": "nvmf_create_subsystem", 00:13:03.088 "req_id": 1 00:13:03.088 } 00:13:03.088 Got JSON-RPC error response 00:13:03.088 response: 00:13:03.088 { 00:13:03.088 "code": -32602, 00:13:03.088 "message": "Invalid cntlid range [1-0]" 00:13:03.088 }' 00:13:03.088 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:03.088 { 00:13:03.088 "nqn": "nqn.2016-06.io.spdk:cnode29774", 00:13:03.088 "max_cntlid": 0, 00:13:03.088 "method": "nvmf_create_subsystem", 00:13:03.088 "req_id": 1 00:13:03.088 } 00:13:03.088 Got JSON-RPC error response 00:13:03.088 response: 00:13:03.088 { 00:13:03.088 "code": -32602, 00:13:03.088 "message": "Invalid cntlid range [1-0]" 00:13:03.088 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.088 13:48:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11332 -I 65520 00:13:03.345 [2024-07-14 13:48:41.142639] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11332: invalid cntlid range [1-65520] 00:13:03.345 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:03.345 { 00:13:03.345 "nqn": "nqn.2016-06.io.spdk:cnode11332", 00:13:03.345 "max_cntlid": 65520, 00:13:03.345 "method": 
"nvmf_create_subsystem", 00:13:03.345 "req_id": 1 00:13:03.345 } 00:13:03.345 Got JSON-RPC error response 00:13:03.345 response: 00:13:03.345 { 00:13:03.345 "code": -32602, 00:13:03.345 "message": "Invalid cntlid range [1-65520]" 00:13:03.345 }' 00:13:03.345 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:03.345 { 00:13:03.345 "nqn": "nqn.2016-06.io.spdk:cnode11332", 00:13:03.345 "max_cntlid": 65520, 00:13:03.345 "method": "nvmf_create_subsystem", 00:13:03.345 "req_id": 1 00:13:03.345 } 00:13:03.345 Got JSON-RPC error response 00:13:03.345 response: 00:13:03.345 { 00:13:03.345 "code": -32602, 00:13:03.345 "message": "Invalid cntlid range [1-65520]" 00:13:03.345 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.345 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10103 -i 6 -I 5 00:13:03.602 [2024-07-14 13:48:41.391486] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10103: invalid cntlid range [6-5] 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:03.602 { 00:13:03.602 "nqn": "nqn.2016-06.io.spdk:cnode10103", 00:13:03.602 "min_cntlid": 6, 00:13:03.602 "max_cntlid": 5, 00:13:03.602 "method": "nvmf_create_subsystem", 00:13:03.602 "req_id": 1 00:13:03.602 } 00:13:03.602 Got JSON-RPC error response 00:13:03.602 response: 00:13:03.602 { 00:13:03.602 "code": -32602, 00:13:03.602 "message": "Invalid cntlid range [6-5]" 00:13:03.602 }' 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:03.602 { 00:13:03.602 "nqn": "nqn.2016-06.io.spdk:cnode10103", 00:13:03.602 "min_cntlid": 6, 00:13:03.602 "max_cntlid": 5, 00:13:03.602 "method": "nvmf_create_subsystem", 00:13:03.602 "req_id": 1 00:13:03.602 } 00:13:03.602 Got JSON-RPC error response 00:13:03.602 response: 00:13:03.602 { 00:13:03.602 "code": 
-32602, 00:13:03.602 "message": "Invalid cntlid range [6-5]" 00:13:03.602 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:03.602 { 00:13:03.602 "name": "foobar", 00:13:03.602 "method": "nvmf_delete_target", 00:13:03.602 "req_id": 1 00:13:03.602 } 00:13:03.602 Got JSON-RPC error response 00:13:03.602 response: 00:13:03.602 { 00:13:03.602 "code": -32602, 00:13:03.602 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:03.602 }' 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:03.602 { 00:13:03.602 "name": "foobar", 00:13:03.602 "method": "nvmf_delete_target", 00:13:03.602 "req_id": 1 00:13:03.602 } 00:13:03.602 Got JSON-RPC error response 00:13:03.602 response: 00:13:03.602 { 00:13:03.602 "code": -32602, 00:13:03.602 "message": "The specified target doesn't exist, cannot delete it." 
00:13:03.602 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.602 rmmod nvme_tcp 00:13:03.602 rmmod nvme_fabrics 00:13:03.602 rmmod nvme_keyring 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.602 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1390934 ']' 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1390934 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 1390934 ']' 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 1390934 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.603 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1390934 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:03.861 13:48:41 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1390934' 00:13:03.861 killing process with pid 1390934 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 1390934 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 1390934 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.861 13:48:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.436 13:48:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.436 00:13:06.436 real 0m8.591s 00:13:06.436 user 0m20.299s 00:13:06.436 sys 0m2.372s 00:13:06.436 13:48:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.436 13:48:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:06.436 ************************************ 00:13:06.436 END TEST nvmf_invalid 00:13:06.436 ************************************ 00:13:06.436 13:48:43 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.436 13:48:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 
']' 00:13:06.436 13:48:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.436 13:48:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.436 ************************************ 00:13:06.436 START TEST nvmf_abort 00:13:06.436 ************************************ 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.436 * Looking for test storage... 00:13:06.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.436 
13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.436 13:48:43 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.437 13:48:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:08.341 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.342 13:48:45 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:13:08.342 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.342 13:48:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:08.342 00:13:08.342 --- 10.0.0.2 ping statistics --- 00:13:08.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.342 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:13:08.342 00:13:08.342 --- 10.0.0.1 ping statistics --- 00:13:08.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.342 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1393476 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1393476 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1393476 ']' 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.342 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:08.343 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.343 [2024-07-14 13:48:46.126870] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:08.343 [2024-07-14 13:48:46.126975] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.343 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.343 [2024-07-14 13:48:46.189460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.343 [2024-07-14 13:48:46.282699] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.343 [2024-07-14 13:48:46.282764] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.343 [2024-07-14 13:48:46.282781] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.343 [2024-07-14 13:48:46.282794] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.343 [2024-07-14 13:48:46.282805] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.343 [2024-07-14 13:48:46.282905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.343 [2024-07-14 13:48:46.282962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.343 [2024-07-14 13:48:46.282965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 [2024-07-14 13:48:46.426722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 Malloc0 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 Delay0 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.601 [2024-07-14 13:48:46.495064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.601 13:48:46 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:08.601 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.601 [2024-07-14 13:48:46.559427] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:11.129 Initializing NVMe Controllers 00:13:11.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:11.129 controller IO queue size 128 less than required 00:13:11.129 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:11.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:11.129 Initialization complete. Launching workers. 
00:13:11.129 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33074 00:13:11.129 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33135, failed to submit 62 00:13:11.129 success 33078, unsuccess 57, failed 0 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.129 rmmod nvme_tcp 00:13:11.129 rmmod nvme_fabrics 00:13:11.129 rmmod nvme_keyring 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1393476 ']' 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1393476 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1393476 ']' 00:13:11.129 13:48:48 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1393476 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1393476 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1393476' 00:13:11.129 killing process with pid 1393476 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1393476 00:13:11.129 13:48:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1393476 00:13:11.129 13:48:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.129 13:48:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.129 13:48:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.129 13:48:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.129 13:48:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.129 13:48:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.130 13:48:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.130 13:48:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.666 13:48:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.666 00:13:13.666 real 0m7.235s 00:13:13.666 user 0m10.558s 00:13:13.666 sys 0m2.572s 00:13:13.666 13:48:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:13:13.666 13:48:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.666 ************************************ 00:13:13.666 END TEST nvmf_abort 00:13:13.666 ************************************ 00:13:13.667 13:48:51 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:13.667 13:48:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:13.667 13:48:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.667 13:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.667 ************************************ 00:13:13.667 START TEST nvmf_ns_hotplug_stress 00:13:13.667 ************************************ 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:13.667 * Looking for test storage... 
00:13:13.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.667 13:48:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.667 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:15.570 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:15.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.571 
13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:15.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:15.571 
Found net devices under 0000:0a:00.0: cvl_0_0 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:15.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.571 13:48:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:13:15.571 00:13:15.571 --- 10.0.0.2 ping statistics --- 00:13:15.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.571 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:13:15.571 00:13:15.571 --- 10.0.0.1 ping statistics --- 00:13:15.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.571 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1395814 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1395814 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1395814 ']' 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.571 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:15.572 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.572 [2024-07-14 13:48:53.420853] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:15.572 [2024-07-14 13:48:53.420978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.572 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.572 [2024-07-14 13:48:53.487040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.829 [2024-07-14 13:48:53.576991] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.829 [2024-07-14 13:48:53.577046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.829 [2024-07-14 13:48:53.577060] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.829 [2024-07-14 13:48:53.577071] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.829 [2024-07-14 13:48:53.577081] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.829 [2024-07-14 13:48:53.577169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.829 [2024-07-14 13:48:53.577232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.829 [2024-07-14 13:48:53.577235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:15.829 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:16.086 [2024-07-14 13:48:53.982561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.086 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:16.343 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.599 [2024-07-14 13:48:54.485335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:13:16.599 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:16.856 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:17.114 Malloc0 00:13:17.114 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:17.371 Delay0 00:13:17.371 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.629 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:17.886 NULL1 00:13:17.886 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:18.143 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1396114 00:13:18.143 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:18.143 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:18.144 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.144 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.401 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.658 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:18.658 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:18.915 true 00:13:18.915 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:18.915 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.172 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.429 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:19.429 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:19.686 true 00:13:19.686 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:19.686 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.252 Read completed with error (sct=0, sc=11) 00:13:20.511 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:20.511 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:20.511 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:20.769 true 00:13:20.769 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:20.769 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.026 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.283 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:21.283 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:21.541 true 00:13:21.541 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:21.541 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.929 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.929 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:22.929 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:23.187 true 00:13:23.187 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:23.187 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.445 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.703 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:23.703 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:23.961 true 00:13:23.961 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:23.961 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.251 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.508 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:24.508 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:24.508 true 00:13:24.769 13:49:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:24.769 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.714 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.714 13:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.972 13:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:25.972 13:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:26.230 true 00:13:26.230 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:26.230 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.169 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.169 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:13:27.427 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:27.427 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:27.685 true 00:13:27.685 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:27.685 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.945 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.204 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:28.204 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:28.204 true 00:13:28.204 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:28.204 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.140 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.397 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:29.397 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:29.654 true 00:13:29.654 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:29.654 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.912 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.170 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:30.170 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:30.428 true 00:13:30.428 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:30.428 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.364 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.622 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:31.622 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:31.879 true 00:13:31.880 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:31.880 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.138 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.397 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:32.397 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:32.656 true 00:13:32.656 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:32.656 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.593 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.593 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:33.593 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:33.850 true 00:13:33.850 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:33.850 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.108 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.367 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:34.367 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:34.625 true 00:13:34.625 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:34.625 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.560 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.818 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:35.818 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:36.077 true 00:13:36.077 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:36.078 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.336 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.336 13:49:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:36.336 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:36.592 true 00:13:36.592 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:36.592 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.528 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.787 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:37.787 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:38.045 true 00:13:38.045 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:38.045 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.303 13:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.562 13:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:38.562 13:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:38.820 true 00:13:38.820 13:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:38.820 13:49:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.755 13:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.013 13:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:40.013 13:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:40.278 true 00:13:40.278 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:40.278 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.580 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.838 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:40.838 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:41.096 true 00:13:41.096 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:41.096 13:49:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.032 13:49:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.290 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:42.290 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:42.548 true 00:13:42.548 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:42.548 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.805 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.063 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:43.063 13:49:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 
00:13:43.322 true 00:13:43.322 13:49:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:43.322 13:49:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.258 13:49:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.517 13:49:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:44.517 13:49:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:44.774 true 00:13:44.774 13:49:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:44.774 13:49:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.032 13:49:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.289 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:45.289 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:45.547 true 00:13:45.547 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:45.547 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.804 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.061 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:46.061 13:49:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:46.318 true 00:13:46.319 13:49:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:46.319 13:49:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.254 13:49:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.511 13:49:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:47.511 13:49:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:47.768 true 00:13:47.768 13:49:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:47.768 13:49:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.025 13:49:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.282 13:49:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:48.282 13:49:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:48.540 true 00:13:48.540 13:49:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114 00:13:48.540 13:49:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.540 Initializing NVMe Controllers 00:13:48.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.540 Controller IO queue size 128, less than required. 00:13:48.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:48.540 Controller IO queue size 128, less than required. 00:13:48.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:48.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:48.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:48.540 Initialization complete. Launching workers. 
00:13:48.540 ========================================================
00:13:48.540 Latency(us)
00:13:48.540 Device Information : IOPS MiB/s Average min max
00:13:48.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 708.98 0.35 87497.42 3207.31 1025246.99
00:13:48.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10241.01 5.00 12461.46 1630.42 456065.75
00:13:48.540 ========================================================
00:13:48.540 Total : 10949.99 5.35 17319.82 1630.42 1025246.99
00:13:48.540
00:13:48.798 13:49:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:49.055 13:49:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:13:49.055 13:49:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:13:49.313 true
00:13:49.313 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1396114
00:13:49.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1396114) - No such process
00:13:49.313 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1396114
00:13:49.313 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:49.570 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:49.826 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:49.826 13:49:27
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:49.826 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:49.826 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.826 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:50.084 null0 00:13:50.084 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.084 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.084 13:49:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:50.345 null1 00:13:50.345 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.345 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.345 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:50.345 null2 00:13:50.604 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.604 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.604 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:50.604 null3 00:13:50.604 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.604 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:13:50.604 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:50.862 null4 00:13:50.862 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.862 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.862 13:49:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:51.120 null5 00:13:51.120 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.120 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:51.120 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:51.377 null6 00:13:51.378 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.378 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:51.378 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:51.635 null7 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # 
pids+=($!) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.635 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1400175 1400176 1400178 1400180 1400182 1400184 1400186 1400188 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.636 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.894 13:49:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.152 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.409 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.409 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.409 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.409 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.409 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.667 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.926 13:49:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.926 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.183 13:49:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.441 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.699 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.957 13:49:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.215 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.473 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.731 
13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.731 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.989 13:49:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.247 13:49:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.247 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.506 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.801 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.060 13:49:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.319 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.577 13:49:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.577 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.835 13:49:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
00:13:57.092 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.092 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 
00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.093 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.093 rmmod nvme_tcp 00:13:57.093 rmmod nvme_fabrics 00:13:57.350 rmmod nvme_keyring 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1395814 ']' 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1395814 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1395814 ']' 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1395814 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1395814 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1395814' 00:13:57.350 killing 
process with pid 1395814 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1395814 00:13:57.350 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1395814 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.609 13:49:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.509 13:49:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.509 00:13:59.509 real 0m46.224s 00:13:59.509 user 3m32.503s 00:13:59.509 sys 0m16.212s 00:13:59.509 13:49:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.509 13:49:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.509 ************************************ 00:13:59.509 END TEST nvmf_ns_hotplug_stress 00:13:59.509 ************************************ 00:13:59.509 13:49:37 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:59.509 13:49:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:59.509 13:49:37 nvmf_tcp -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:13:59.509 13:49:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.509 ************************************ 00:13:59.509 START TEST nvmf_connect_stress 00:13:59.509 ************************************ 00:13:59.509 13:49:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:59.766 * Looking for test storage... 00:13:59.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.766 
13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.766 13:49:37 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.767 13:49:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.767 13:49:37 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:01.662 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.663 13:49:39 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:01.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:01.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:01.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:01.663 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.663 13:49:39 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:01.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:14:01.663 00:14:01.663 --- 10.0.0.2 ping statistics --- 00:14:01.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.663 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:01.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:14:01.663 00:14:01.663 --- 10.0.0.1 ping statistics --- 00:14:01.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.663 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:01.663 13:49:39 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1402927 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1402927 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1402927 ']' 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.663 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.663 [2024-07-14 13:49:39.595604] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:14:01.663 [2024-07-14 13:49:39.595680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.664 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.921 [2024-07-14 13:49:39.663595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.921 [2024-07-14 13:49:39.752066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.921 [2024-07-14 13:49:39.752122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.921 [2024-07-14 13:49:39.752150] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.921 [2024-07-14 13:49:39.752160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.921 [2024-07-14 13:49:39.752170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:01.921 [2024-07-14 13:49:39.752263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.921 [2024-07-14 13:49:39.752325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.921 [2024-07-14 13:49:39.752329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.921 [2024-07-14 13:49:39.883418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.921 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.177 [2024-07-14 13:49:39.918027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.177 NULL1 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1402956 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 
13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.177 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.178 13:49:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.446 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.446 13:49:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:02.446 13:49:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.446 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.446 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.703 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.703 13:49:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:02.703 13:49:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.703 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.703 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.960 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.960 13:49:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:02.960 13:49:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.960 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.960 13:49:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.524 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.524 13:49:41 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:03.524 13:49:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.524 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.524 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.781 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.781 13:49:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:03.781 13:49:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.781 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.781 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.038 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.038 13:49:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:04.038 13:49:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.038 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.038 13:49:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.296 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.296 13:49:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:04.296 13:49:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.296 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.296 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.860 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.860 13:49:42 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:04.860 13:49:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.860 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.860 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 13:49:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:05.117 13:49:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.117 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 13:49:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.374 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.374 13:49:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:05.374 13:49:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.374 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.374 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.631 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.631 13:49:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:05.631 13:49:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.631 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.631 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.888 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.888 13:49:43 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:05.888 13:49:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.888 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.888 13:49:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.451 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.451 13:49:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:06.451 13:49:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.451 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.451 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.708 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.708 13:49:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:06.708 13:49:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.708 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.708 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.965 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.965 13:49:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:06.965 13:49:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.965 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.965 13:49:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.222 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.222 13:49:45 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:07.222 13:49:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.222 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.222 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.479 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.479 13:49:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:07.479 13:49:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.479 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.479 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.042 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.042 13:49:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:08.043 13:49:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.043 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.043 13:49:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.300 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.300 13:49:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:08.300 13:49:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.300 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.300 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.557 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.557 13:49:46 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:08.557 13:49:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.557 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.557 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.814 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.814 13:49:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:08.814 13:49:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.814 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.814 13:49:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.071 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.071 13:49:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:09.071 13:49:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.071 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.071 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.634 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.634 13:49:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:09.634 13:49:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.634 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.634 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.892 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.892 13:49:47 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:09.892 13:49:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.892 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.892 13:49:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.149 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.149 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:10.149 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.149 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.149 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.406 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.406 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:10.406 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.406 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.406 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.971 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.971 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:10.971 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.971 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.971 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.229 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.229 13:49:48 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1402956 00:14:11.229 13:49:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.229 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.229 13:49:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.486 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.486 13:49:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:11.486 13:49:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.486 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.486 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.744 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.744 13:49:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:11.744 13:49:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.744 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.744 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.073 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.073 13:49:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:12.073 13:49:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.073 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.073 13:49:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.352 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1402956 00:14:12.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1402956) - No such process 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1402956 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.352 rmmod nvme_tcp 00:14:12.352 rmmod nvme_fabrics 00:14:12.352 rmmod nvme_keyring 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1402927 ']' 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1402927 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1402927 ']' 
00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1402927 00:14:12.352 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:12.353 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.353 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1402927 00:14:12.353 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:12.353 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:12.353 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1402927' 00:14:12.353 killing process with pid 1402927 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 1402927 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1402927 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.612 13:49:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.144 13:49:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:14:15.144 00:14:15.144 real 0m15.134s 00:14:15.144 user 0m38.387s 00:14:15.144 sys 0m5.720s 00:14:15.144 13:49:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.144 13:49:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.144 ************************************ 00:14:15.144 END TEST nvmf_connect_stress 00:14:15.144 ************************************ 00:14:15.144 13:49:52 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.144 13:49:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:15.144 13:49:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.144 13:49:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.144 ************************************ 00:14:15.144 START TEST nvmf_fused_ordering 00:14:15.144 ************************************ 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.144 * Looking for test storage... 
00:14:15.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.144 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.145 13:49:52 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.145 13:49:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.044 13:49:54 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.044 
13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:17.044 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:17.044 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.044 
13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:17.044 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.044 13:49:54 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.044 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:17.045 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:17.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:17.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms
00:14:17.045
00:14:17.045 --- 10.0.0.2 ping statistics ---
00:14:17.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:17.045 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:17.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:17.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:14:17.045
00:14:17.045 --- 10.0.0.1 ping statistics ---
00:14:17.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:17.045 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1406109
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1406109
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1406109 ']'
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:17.045 13:49:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.045 [2024-07-14 13:49:54.870398] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:14:17.045 [2024-07-14 13:49:54.870480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:17.045 EAL: No free 2048 kB hugepages reported on node 1
00:14:17.045 [2024-07-14 13:49:54.938393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:17.304 [2024-07-14 13:49:55.025340] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
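The `nvmf_tcp_init` steps traced above (flush addresses, create a namespace for the target side, move `cvl_0_0` into it, assign the 10.0.0.1/10.0.0.2 pair, open TCP port 4420, then ping both ways) can be summarized as a standalone script. This is a minimal sketch based only on the commands visible in this log; the `run` wrapper merely echoes each command so the sketch can be executed without root privileges or the real NICs.

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init wiring from the trace above.
# Interface names and addresses are taken from this log; "run" only
# echoes, so executing the sketch performs no privileged operation.
run() { printf '+ %s\n' "$*"; }

TARGET_NS=cvl_0_0_ns_spdk     # namespace hosting the NVMe-oF target side
TARGET_IF=cvl_0_0             # moves into the namespace
INITIATOR_IF=cvl_0_1          # stays in the root namespace

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # root ns -> target ns
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

Splitting the two ports of one NIC across a network namespace boundary like this is what lets a single host act as both initiator and target over real hardware.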
00:14:17.304 [2024-07-14 13:49:55.025396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:17.304 [2024-07-14 13:49:55.025426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:17.304 [2024-07-14 13:49:55.025438] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:17.304 [2024-07-14 13:49:55.025449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:17.304 [2024-07-14 13:49:55.025477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 [2024-07-14 13:49:55.171352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 [2024-07-14 13:49:55.187544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 NULL1
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
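The `rpc_cmd` sequence traced above (create the TCP transport, create the subsystem, add a listener, back it with a null bdev, expose it as a namespace) corresponds to SPDK's JSON-RPC client. A hedged sketch of the equivalent calls follows; the `rpc` wrapper only echoes here, since the real invocations (typically via SPDK's `scripts/rpc.py`) need a running `nvmf_tgt` listening on `/var/tmp/spdk.sock`.

```shell
#!/bin/sh
# Sketch of the target bring-up RPCs traced above. "rpc" only echoes;
# substitute SPDK's scripts/rpc.py against a live nvmf_tgt to run them.
rpc() { printf '+ rpc.py %s\n' "$*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8192-byte in-capsule data
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10  # allow any host, serial, max 10 namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns "$NQN" NULL1                           # expose NULL1 as a namespace
```

The null bdev discards writes and returns zeroes on reads, which is exactly what a fused-ordering stress test needs: real command flow through the transport with no storage backend in the way.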
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.304 13:49:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:17.304 [2024-07-14 13:49:55.233462] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:14:17.304 [2024-07-14 13:49:55.233502] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406240 ]
00:14:17.304 EAL: No free 2048 kB hugepages reported on node 1
00:14:17.871 Attached to nqn.2016-06.io.spdk:cnode1
00:14:17.871 Namespace ID: 1 size: 1GB
00:14:17.871 fused_ordering(0) 00:14:17.871 fused_ordering(1) 00:14:17.871 fused_ordering(2) 00:14:17.871 fused_ordering(3) 00:14:17.871 fused_ordering(4) 00:14:17.871 fused_ordering(5) 00:14:17.871 fused_ordering(6) 00:14:17.871 fused_ordering(7) 00:14:17.871 fused_ordering(8) 00:14:17.871 fused_ordering(9) 00:14:17.871 fused_ordering(10) 00:14:17.871 fused_ordering(11) 00:14:17.871 fused_ordering(12) 00:14:17.871 fused_ordering(13) 00:14:17.871 fused_ordering(14) 00:14:17.871 fused_ordering(15) 00:14:17.871 fused_ordering(16) 00:14:17.871 fused_ordering(17) 00:14:17.871 fused_ordering(18) 00:14:17.871 fused_ordering(19) 00:14:17.871 fused_ordering(20) 00:14:17.871 fused_ordering(21) 00:14:17.871 fused_ordering(22) 00:14:17.871 fused_ordering(23) 00:14:17.871 fused_ordering(24) 00:14:17.871 fused_ordering(25) 00:14:17.871 fused_ordering(26) 00:14:17.871
fused_ordering(27) 00:14:17.871 fused_ordering(28) 00:14:17.871 fused_ordering(29) 00:14:17.871 fused_ordering(30) 00:14:17.871 fused_ordering(31) 00:14:17.871 fused_ordering(32) 00:14:17.871 fused_ordering(33) 00:14:17.871 fused_ordering(34) 00:14:17.871 fused_ordering(35) 00:14:17.871 fused_ordering(36) 00:14:17.871 fused_ordering(37) 00:14:17.871 fused_ordering(38) 00:14:17.871 fused_ordering(39) 00:14:17.871 fused_ordering(40) 00:14:17.871 fused_ordering(41) 00:14:17.871 fused_ordering(42) 00:14:17.871 fused_ordering(43) 00:14:17.871 fused_ordering(44) 00:14:17.871 fused_ordering(45) 00:14:17.871 fused_ordering(46) 00:14:17.871 fused_ordering(47) 00:14:17.871 fused_ordering(48) 00:14:17.871 fused_ordering(49) 00:14:17.871 fused_ordering(50) 00:14:17.871 fused_ordering(51) 00:14:17.871 fused_ordering(52) 00:14:17.871 fused_ordering(53) 00:14:17.871 fused_ordering(54) 00:14:17.871 fused_ordering(55) 00:14:17.871 fused_ordering(56) 00:14:17.871 fused_ordering(57) 00:14:17.871 fused_ordering(58) 00:14:17.871 fused_ordering(59) 00:14:17.871 fused_ordering(60) 00:14:17.871 fused_ordering(61) 00:14:17.871 fused_ordering(62) 00:14:17.871 fused_ordering(63) 00:14:17.871 fused_ordering(64) 00:14:17.871 fused_ordering(65) 00:14:17.871 fused_ordering(66) 00:14:17.871 fused_ordering(67) 00:14:17.871 fused_ordering(68) 00:14:17.871 fused_ordering(69) 00:14:17.871 fused_ordering(70) 00:14:17.871 fused_ordering(71) 00:14:17.871 fused_ordering(72) 00:14:17.871 fused_ordering(73) 00:14:17.871 fused_ordering(74) 00:14:17.871 fused_ordering(75) 00:14:17.871 fused_ordering(76) 00:14:17.871 fused_ordering(77) 00:14:17.871 fused_ordering(78) 00:14:17.871 fused_ordering(79) 00:14:17.871 fused_ordering(80) 00:14:17.871 fused_ordering(81) 00:14:17.871 fused_ordering(82) 00:14:17.871 fused_ordering(83) 00:14:17.871 fused_ordering(84) 00:14:17.871 fused_ordering(85) 00:14:17.871 fused_ordering(86) 00:14:17.871 fused_ordering(87) 00:14:17.871 fused_ordering(88) 00:14:17.871 
fused_ordering(89) 00:14:17.871 fused_ordering(90) 00:14:17.871 fused_ordering(91) 00:14:17.871 fused_ordering(92) 00:14:17.871 fused_ordering(93) 00:14:17.871 fused_ordering(94) 00:14:17.871 fused_ordering(95) 00:14:17.871 fused_ordering(96) 00:14:17.871 fused_ordering(97) 00:14:17.871 fused_ordering(98) 00:14:17.871 fused_ordering(99) 00:14:17.871 fused_ordering(100) 00:14:17.871 fused_ordering(101) 00:14:17.871 fused_ordering(102) 00:14:17.871 fused_ordering(103) 00:14:17.871 fused_ordering(104) 00:14:17.871 fused_ordering(105) 00:14:17.871 fused_ordering(106) 00:14:17.871 fused_ordering(107) 00:14:17.871 fused_ordering(108) 00:14:17.871 fused_ordering(109) 00:14:17.871 fused_ordering(110) 00:14:17.871 fused_ordering(111) 00:14:17.871 fused_ordering(112) 00:14:17.871 fused_ordering(113) 00:14:17.871 fused_ordering(114) 00:14:17.871 fused_ordering(115) 00:14:17.871 fused_ordering(116) 00:14:17.871 fused_ordering(117) 00:14:17.871 fused_ordering(118) 00:14:17.871 fused_ordering(119) 00:14:17.871 fused_ordering(120) 00:14:17.871 fused_ordering(121) 00:14:17.871 fused_ordering(122) 00:14:17.871 fused_ordering(123) 00:14:17.871 fused_ordering(124) 00:14:17.871 fused_ordering(125) 00:14:17.871 fused_ordering(126) 00:14:17.871 fused_ordering(127) 00:14:17.871 fused_ordering(128) 00:14:17.871 fused_ordering(129) 00:14:17.871 fused_ordering(130) 00:14:17.871 fused_ordering(131) 00:14:17.871 fused_ordering(132) 00:14:17.871 fused_ordering(133) 00:14:17.871 fused_ordering(134) 00:14:17.871 fused_ordering(135) 00:14:17.871 fused_ordering(136) 00:14:17.871 fused_ordering(137) 00:14:17.871 fused_ordering(138) 00:14:17.871 fused_ordering(139) 00:14:17.871 fused_ordering(140) 00:14:17.871 fused_ordering(141) 00:14:17.871 fused_ordering(142) 00:14:17.871 fused_ordering(143) 00:14:17.871 fused_ordering(144) 00:14:17.871 fused_ordering(145) 00:14:17.871 fused_ordering(146) 00:14:17.871 fused_ordering(147) 00:14:17.871 fused_ordering(148) 00:14:17.871 fused_ordering(149) 
00:14:17.871 fused_ordering(150) 00:14:17.871 fused_ordering(151) 00:14:17.871 fused_ordering(152) 00:14:17.871 fused_ordering(153) 00:14:17.871 fused_ordering(154) 00:14:17.871 fused_ordering(155) 00:14:17.871 fused_ordering(156) 00:14:17.871 fused_ordering(157) 00:14:17.871 fused_ordering(158) 00:14:17.871 fused_ordering(159) 00:14:17.871 fused_ordering(160) 00:14:17.871 fused_ordering(161) 00:14:17.871 fused_ordering(162) 00:14:17.871 fused_ordering(163) 00:14:17.871 fused_ordering(164) 00:14:17.872 fused_ordering(165) 00:14:17.872 fused_ordering(166) 00:14:17.872 fused_ordering(167) 00:14:17.872 fused_ordering(168) 00:14:17.872 fused_ordering(169) 00:14:17.872 fused_ordering(170) 00:14:17.872 fused_ordering(171) 00:14:17.872 fused_ordering(172) 00:14:17.872 fused_ordering(173) 00:14:17.872 fused_ordering(174) 00:14:17.872 fused_ordering(175) 00:14:17.872 fused_ordering(176) 00:14:17.872 fused_ordering(177) 00:14:17.872 fused_ordering(178) 00:14:17.872 fused_ordering(179) 00:14:17.872 fused_ordering(180) 00:14:17.872 fused_ordering(181) 00:14:17.872 fused_ordering(182) 00:14:17.872 fused_ordering(183) 00:14:17.872 fused_ordering(184) 00:14:17.872 fused_ordering(185) 00:14:17.872 fused_ordering(186) 00:14:17.872 fused_ordering(187) 00:14:17.872 fused_ordering(188) 00:14:17.872 fused_ordering(189) 00:14:17.872 fused_ordering(190) 00:14:17.872 fused_ordering(191) 00:14:17.872 fused_ordering(192) 00:14:17.872 fused_ordering(193) 00:14:17.872 fused_ordering(194) 00:14:17.872 fused_ordering(195) 00:14:17.872 fused_ordering(196) 00:14:17.872 fused_ordering(197) 00:14:17.872 fused_ordering(198) 00:14:17.872 fused_ordering(199) 00:14:17.872 fused_ordering(200) 00:14:17.872 fused_ordering(201) 00:14:17.872 fused_ordering(202) 00:14:17.872 fused_ordering(203) 00:14:17.872 fused_ordering(204) 00:14:17.872 fused_ordering(205) 00:14:18.130 fused_ordering(206) 00:14:18.130 fused_ordering(207) 00:14:18.130 fused_ordering(208) 00:14:18.130 fused_ordering(209) 00:14:18.130 
fused_ordering(210) 00:14:18.130 fused_ordering(211) 00:14:18.130 fused_ordering(212) 00:14:18.130 fused_ordering(213) 00:14:18.130 fused_ordering(214) 00:14:18.130 fused_ordering(215) 00:14:18.130 fused_ordering(216) 00:14:18.130 fused_ordering(217) 00:14:18.130 fused_ordering(218) 00:14:18.130 fused_ordering(219) 00:14:18.130 fused_ordering(220) 00:14:18.130 fused_ordering(221) 00:14:18.130 fused_ordering(222) 00:14:18.130 fused_ordering(223) 00:14:18.130 fused_ordering(224) 00:14:18.130 fused_ordering(225) 00:14:18.130 fused_ordering(226) 00:14:18.130 fused_ordering(227) 00:14:18.130 fused_ordering(228) 00:14:18.130 fused_ordering(229) 00:14:18.130 fused_ordering(230) 00:14:18.130 fused_ordering(231) 00:14:18.130 fused_ordering(232) 00:14:18.130 fused_ordering(233) 00:14:18.130 fused_ordering(234) 00:14:18.130 fused_ordering(235) 00:14:18.130 fused_ordering(236) 00:14:18.130 fused_ordering(237) 00:14:18.130 fused_ordering(238) 00:14:18.130 fused_ordering(239) 00:14:18.130 fused_ordering(240) 00:14:18.130 fused_ordering(241) 00:14:18.130 fused_ordering(242) 00:14:18.130 fused_ordering(243) 00:14:18.130 fused_ordering(244) 00:14:18.130 fused_ordering(245) 00:14:18.130 fused_ordering(246) 00:14:18.130 fused_ordering(247) 00:14:18.130 fused_ordering(248) 00:14:18.130 fused_ordering(249) 00:14:18.130 fused_ordering(250) 00:14:18.130 fused_ordering(251) 00:14:18.130 fused_ordering(252) 00:14:18.130 fused_ordering(253) 00:14:18.130 fused_ordering(254) 00:14:18.130 fused_ordering(255) 00:14:18.130 fused_ordering(256) 00:14:18.130 fused_ordering(257) 00:14:18.130 fused_ordering(258) 00:14:18.130 fused_ordering(259) 00:14:18.130 fused_ordering(260) 00:14:18.130 fused_ordering(261) 00:14:18.130 fused_ordering(262) 00:14:18.130 fused_ordering(263) 00:14:18.130 fused_ordering(264) 00:14:18.130 fused_ordering(265) 00:14:18.130 fused_ordering(266) 00:14:18.130 fused_ordering(267) 00:14:18.130 fused_ordering(268) 00:14:18.130 fused_ordering(269) 00:14:18.130 fused_ordering(270) 
00:14:18.130 fused_ordering(271) 00:14:18.130 fused_ordering(272) 00:14:18.130 fused_ordering(273) 00:14:18.130 fused_ordering(274) 00:14:18.130 fused_ordering(275) 00:14:18.130 fused_ordering(276) 00:14:18.130 fused_ordering(277) 00:14:18.130 fused_ordering(278) 00:14:18.130 fused_ordering(279) 00:14:18.130 fused_ordering(280) 00:14:18.130 fused_ordering(281) 00:14:18.130 fused_ordering(282) 00:14:18.130 fused_ordering(283) 00:14:18.130 fused_ordering(284) 00:14:18.130 fused_ordering(285) 00:14:18.130 fused_ordering(286) 00:14:18.130 fused_ordering(287) 00:14:18.130 fused_ordering(288) 00:14:18.130 fused_ordering(289) 00:14:18.130 fused_ordering(290) 00:14:18.130 fused_ordering(291) 00:14:18.130 fused_ordering(292) 00:14:18.130 fused_ordering(293) 00:14:18.130 fused_ordering(294) 00:14:18.130 fused_ordering(295) 00:14:18.130 fused_ordering(296) 00:14:18.130 fused_ordering(297) 00:14:18.130 fused_ordering(298) 00:14:18.130 fused_ordering(299) 00:14:18.130 fused_ordering(300) 00:14:18.130 fused_ordering(301) 00:14:18.130 fused_ordering(302) 00:14:18.130 fused_ordering(303) 00:14:18.130 fused_ordering(304) 00:14:18.130 fused_ordering(305) 00:14:18.130 fused_ordering(306) 00:14:18.130 fused_ordering(307) 00:14:18.130 fused_ordering(308) 00:14:18.130 fused_ordering(309) 00:14:18.130 fused_ordering(310) 00:14:18.130 fused_ordering(311) 00:14:18.130 fused_ordering(312) 00:14:18.130 fused_ordering(313) 00:14:18.130 fused_ordering(314) 00:14:18.130 fused_ordering(315) 00:14:18.130 fused_ordering(316) 00:14:18.130 fused_ordering(317) 00:14:18.130 fused_ordering(318) 00:14:18.130 fused_ordering(319) 00:14:18.130 fused_ordering(320) 00:14:18.130 fused_ordering(321) 00:14:18.130 fused_ordering(322) 00:14:18.130 fused_ordering(323) 00:14:18.130 fused_ordering(324) 00:14:18.130 fused_ordering(325) 00:14:18.130 fused_ordering(326) 00:14:18.130 fused_ordering(327) 00:14:18.130 fused_ordering(328) 00:14:18.130 fused_ordering(329) 00:14:18.130 fused_ordering(330) 00:14:18.130 
fused_ordering(331) 00:14:18.130 fused_ordering(332) 00:14:18.130 fused_ordering(333) 00:14:18.130 fused_ordering(334) 00:14:18.130 fused_ordering(335) 00:14:18.130 fused_ordering(336) 00:14:18.130 fused_ordering(337) 00:14:18.130 fused_ordering(338) 00:14:18.130 fused_ordering(339) 00:14:18.130 fused_ordering(340) 00:14:18.130 fused_ordering(341) 00:14:18.130 fused_ordering(342) 00:14:18.130 fused_ordering(343) 00:14:18.130 fused_ordering(344) 00:14:18.130 fused_ordering(345) 00:14:18.130 fused_ordering(346) 00:14:18.130 fused_ordering(347) 00:14:18.130 fused_ordering(348) 00:14:18.130 fused_ordering(349) 00:14:18.130 fused_ordering(350) 00:14:18.130 fused_ordering(351) 00:14:18.130 fused_ordering(352) 00:14:18.130 fused_ordering(353) 00:14:18.130 fused_ordering(354) 00:14:18.130 fused_ordering(355) 00:14:18.130 fused_ordering(356) 00:14:18.130 fused_ordering(357) 00:14:18.130 fused_ordering(358) 00:14:18.130 fused_ordering(359) 00:14:18.130 fused_ordering(360) 00:14:18.130 fused_ordering(361) 00:14:18.130 fused_ordering(362) 00:14:18.130 fused_ordering(363) 00:14:18.130 fused_ordering(364) 00:14:18.130 fused_ordering(365) 00:14:18.130 fused_ordering(366) 00:14:18.130 fused_ordering(367) 00:14:18.130 fused_ordering(368) 00:14:18.130 fused_ordering(369) 00:14:18.130 fused_ordering(370) 00:14:18.130 fused_ordering(371) 00:14:18.130 fused_ordering(372) 00:14:18.130 fused_ordering(373) 00:14:18.130 fused_ordering(374) 00:14:18.130 fused_ordering(375) 00:14:18.130 fused_ordering(376) 00:14:18.130 fused_ordering(377) 00:14:18.130 fused_ordering(378) 00:14:18.130 fused_ordering(379) 00:14:18.130 fused_ordering(380) 00:14:18.130 fused_ordering(381) 00:14:18.130 fused_ordering(382) 00:14:18.130 fused_ordering(383) 00:14:18.130 fused_ordering(384) 00:14:18.130 fused_ordering(385) 00:14:18.130 fused_ordering(386) 00:14:18.130 fused_ordering(387) 00:14:18.130 fused_ordering(388) 00:14:18.130 fused_ordering(389) 00:14:18.130 fused_ordering(390) 00:14:18.130 fused_ordering(391) 
00:14:18.130 fused_ordering(392) 00:14:18.130 fused_ordering(393) 00:14:18.130 fused_ordering(394) 00:14:18.130 fused_ordering(395) 00:14:18.130 fused_ordering(396) 00:14:18.130 fused_ordering(397) 00:14:18.130 fused_ordering(398) 00:14:18.130 fused_ordering(399) 00:14:18.130 fused_ordering(400) 00:14:18.130 fused_ordering(401) 00:14:18.130 fused_ordering(402) 00:14:18.130 fused_ordering(403) 00:14:18.130 fused_ordering(404) 00:14:18.130 fused_ordering(405) 00:14:18.130 fused_ordering(406) 00:14:18.130 fused_ordering(407) 00:14:18.130 fused_ordering(408) 00:14:18.130 fused_ordering(409) 00:14:18.130 fused_ordering(410) 00:14:18.697 fused_ordering(411) 00:14:18.697 fused_ordering(412) 00:14:18.697 fused_ordering(413) 00:14:18.697 fused_ordering(414) 00:14:18.697 fused_ordering(415) 00:14:18.697 fused_ordering(416) 00:14:18.697 fused_ordering(417) 00:14:18.697 fused_ordering(418) 00:14:18.697 fused_ordering(419) 00:14:18.697 fused_ordering(420) 00:14:18.697 fused_ordering(421) 00:14:18.697 fused_ordering(422) 00:14:18.697 fused_ordering(423) 00:14:18.697 fused_ordering(424) 00:14:18.697 fused_ordering(425) 00:14:18.697 fused_ordering(426) 00:14:18.697 fused_ordering(427) 00:14:18.697 fused_ordering(428) 00:14:18.697 fused_ordering(429) 00:14:18.697 fused_ordering(430) 00:14:18.697 fused_ordering(431) 00:14:18.697 fused_ordering(432) 00:14:18.697 fused_ordering(433) 00:14:18.697 fused_ordering(434) 00:14:18.697 fused_ordering(435) 00:14:18.697 fused_ordering(436) 00:14:18.697 fused_ordering(437) 00:14:18.697 fused_ordering(438) 00:14:18.697 fused_ordering(439) 00:14:18.697 fused_ordering(440) 00:14:18.697 fused_ordering(441) 00:14:18.697 fused_ordering(442) 00:14:18.697 fused_ordering(443) 00:14:18.697 fused_ordering(444) 00:14:18.697 fused_ordering(445) 00:14:18.697 fused_ordering(446) 00:14:18.697 fused_ordering(447) 00:14:18.697 fused_ordering(448) 00:14:18.697 fused_ordering(449) 00:14:18.697 fused_ordering(450) 00:14:18.697 fused_ordering(451) 00:14:18.697 
fused_ordering(452) 00:14:18.697 fused_ordering(453) 00:14:18.697 fused_ordering(454) 00:14:18.697 fused_ordering(455) 00:14:18.697 fused_ordering(456) 00:14:18.697 fused_ordering(457) 00:14:18.697 fused_ordering(458) 00:14:18.697 fused_ordering(459) 00:14:18.697 fused_ordering(460) 00:14:18.697 fused_ordering(461) 00:14:18.697 fused_ordering(462) 00:14:18.697 fused_ordering(463) 00:14:18.697 fused_ordering(464) 00:14:18.697 fused_ordering(465) 00:14:18.697 fused_ordering(466) 00:14:18.697 fused_ordering(467) 00:14:18.697 fused_ordering(468) 00:14:18.697 fused_ordering(469) 00:14:18.697 fused_ordering(470) 00:14:18.697 fused_ordering(471) 00:14:18.697 fused_ordering(472) 00:14:18.697 fused_ordering(473) 00:14:18.697 fused_ordering(474) 00:14:18.697 fused_ordering(475) 00:14:18.697 fused_ordering(476) 00:14:18.697 fused_ordering(477) 00:14:18.697 fused_ordering(478) 00:14:18.697 fused_ordering(479) 00:14:18.697 fused_ordering(480) 00:14:18.697 fused_ordering(481) 00:14:18.697 fused_ordering(482) 00:14:18.697 fused_ordering(483) 00:14:18.697 fused_ordering(484) 00:14:18.697 fused_ordering(485) 00:14:18.697 fused_ordering(486) 00:14:18.697 fused_ordering(487) 00:14:18.697 fused_ordering(488) 00:14:18.697 fused_ordering(489) 00:14:18.697 fused_ordering(490) 00:14:18.697 fused_ordering(491) 00:14:18.697 fused_ordering(492) 00:14:18.697 fused_ordering(493) 00:14:18.697 fused_ordering(494) 00:14:18.697 fused_ordering(495) 00:14:18.697 fused_ordering(496) 00:14:18.697 fused_ordering(497) 00:14:18.697 fused_ordering(498) 00:14:18.697 fused_ordering(499) 00:14:18.697 fused_ordering(500) 00:14:18.697 fused_ordering(501) 00:14:18.697 fused_ordering(502) 00:14:18.697 fused_ordering(503) 00:14:18.697 fused_ordering(504) 00:14:18.697 fused_ordering(505) 00:14:18.697 fused_ordering(506) 00:14:18.697 fused_ordering(507) 00:14:18.697 fused_ordering(508) 00:14:18.697 fused_ordering(509) 00:14:18.697 fused_ordering(510) 00:14:18.697 fused_ordering(511) 00:14:18.697 fused_ordering(512) 
00:14:18.697 fused_ordering(513) 00:14:18.697 fused_ordering(514) 00:14:18.697 fused_ordering(515) 00:14:18.697 fused_ordering(516) 00:14:18.697 fused_ordering(517) 00:14:18.697 fused_ordering(518) 00:14:18.697 fused_ordering(519) 00:14:18.697 fused_ordering(520) 00:14:18.697 fused_ordering(521) 00:14:18.697 fused_ordering(522) 00:14:18.697 fused_ordering(523) 00:14:18.697 fused_ordering(524) 00:14:18.697 fused_ordering(525) 00:14:18.697 fused_ordering(526) 00:14:18.697 fused_ordering(527) 00:14:18.697 fused_ordering(528) 00:14:18.697 fused_ordering(529) 00:14:18.697 fused_ordering(530) 00:14:18.697 fused_ordering(531) 00:14:18.697 fused_ordering(532) 00:14:18.697 fused_ordering(533) 00:14:18.697 fused_ordering(534) 00:14:18.697 fused_ordering(535) 00:14:18.697 fused_ordering(536) 00:14:18.697 fused_ordering(537) 00:14:18.697 fused_ordering(538) 00:14:18.697 fused_ordering(539) 00:14:18.697 fused_ordering(540) 00:14:18.697 fused_ordering(541) 00:14:18.697 fused_ordering(542) 00:14:18.697 fused_ordering(543) 00:14:18.697 fused_ordering(544) 00:14:18.697 fused_ordering(545) 00:14:18.697 fused_ordering(546) 00:14:18.697 fused_ordering(547) 00:14:18.697 fused_ordering(548) 00:14:18.697 fused_ordering(549) 00:14:18.697 fused_ordering(550) 00:14:18.697 fused_ordering(551) 00:14:18.697 fused_ordering(552) 00:14:18.697 fused_ordering(553) 00:14:18.697 fused_ordering(554) 00:14:18.697 fused_ordering(555) 00:14:18.697 fused_ordering(556) 00:14:18.697 fused_ordering(557) 00:14:18.697 fused_ordering(558) 00:14:18.697 fused_ordering(559) 00:14:18.697 fused_ordering(560) 00:14:18.697 fused_ordering(561) 00:14:18.697 fused_ordering(562) 00:14:18.697 fused_ordering(563) 00:14:18.697 fused_ordering(564) 00:14:18.697 fused_ordering(565) 00:14:18.697 fused_ordering(566) 00:14:18.697 fused_ordering(567) 00:14:18.697 fused_ordering(568) 00:14:18.697 fused_ordering(569) 00:14:18.697 fused_ordering(570) 00:14:18.697 fused_ordering(571) 00:14:18.697 fused_ordering(572) 00:14:18.697 
fused_ordering(573) … fused_ordering(1023) [repetitive per-iteration fused_ordering output for iterations 573–1023 elided; all iterations completed between 00:14:18.697 and 00:14:19.835] 13:49:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.835 rmmod nvme_tcp 00:14:19.835 rmmod nvme_fabrics 00:14:19.835 rmmod nvme_keyring 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:19.835 13:49:57 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1406109 ']' 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1406109 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1406109 ']' 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1406109 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1406109 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1406109' 00:14:19.835 killing process with pid 1406109 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1406109 00:14:19.835 13:49:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1406109 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.094 13:49:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.631 13:50:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.631 00:14:22.631 real 0m7.441s 00:14:22.631 user 0m5.082s 00:14:22.631 sys 0m3.145s 00:14:22.631 13:50:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:22.631 13:50:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 ************************************ 00:14:22.631 END TEST nvmf_fused_ordering 00:14:22.631 ************************************ 00:14:22.631 13:50:00 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:22.631 13:50:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:22.631 13:50:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:22.631 13:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 ************************************ 00:14:22.631 START TEST nvmf_delete_subsystem 00:14:22.631 ************************************ 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:22.631 * Looking for test storage... 
00:14:22.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.631 13:50:00 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.631 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.632 13:50:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.537 13:50:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:24.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:24.537 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:24.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:24.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.537 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.538 
13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:14:24.538 00:14:24.538 --- 10.0.0.2 ping statistics --- 00:14:24.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.538 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:14:24.538 00:14:24.538 --- 10.0.0.1 ping statistics --- 00:14:24.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.538 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:24.538 
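The nvmf_tcp_init plumbing traced above (common.sh@229-268) boils down to a short sequence: flush the two ports, move the target port into a fresh namespace, address both ends, open TCP/4420, and cross-ping. A sketch only — the helper name is illustrative, root privileges are required, and the cvl_0_0/cvl_0_1 device names are the ones from this run:

```shell
# Sketch of nvmf_tcp_init as logged above. Helper name is illustrative;
# requires root. Moves the target port into a namespace, addresses both
# ends, opens TCP/4420, and verifies reachability with ping.
setup_nvmf_tcp_netns() {
  local ns=$1 tgt_if=$2 ini_if=$3        # e.g. cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
  ip -4 addr flush "$tgt_if"
  ip -4 addr flush "$ini_if"
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"
  ip addr add 10.0.0.1/24 dev "$ini_if"                       # initiator side
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target side
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

The namespace is what lets target and initiator share one host while still talking over a real NIC pair rather than loopback.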
13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1408429 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1408429 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1408429 ']' 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:24.538 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.538 [2024-07-14 13:50:02.404810] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:24.538 [2024-07-14 13:50:02.404911] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.538 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.538 [2024-07-14 13:50:02.474773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:24.796 [2024-07-14 13:50:02.564639] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:24.796 [2024-07-14 13:50:02.564705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.796 [2024-07-14 13:50:02.564721] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.796 [2024-07-14 13:50:02.564734] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.796 [2024-07-14 13:50:02.564745] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.796 [2024-07-14 13:50:02.564828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.796 [2024-07-14 13:50:02.564834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 [2024-07-14 13:50:02.703447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
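The waitforlisten step above (common.sh@482) blocks until the freshly launched nvmf_tgt is up and answering on its RPC socket. A simplified stand-in — the function name is hypothetical and the real helper also retries the RPC call itself — assuming the default /var/tmp/spdk.sock:

```shell
# Simplified waitforlisten: poll until the app's RPC socket appears,
# failing fast if the process dies first. Name is illustrative.
waitforlisten_sketch() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [ -S "$sock" ] && return 0               # RPC socket is up
    sleep 0.1
  done
  return 1                                   # timed out
}
```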
00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 [2024-07-14 13:50:02.719661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 NULL1 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 Delay0 00:14:24.796 13:50:02 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1408474 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:24.796 13:50:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:24.796 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.053 [2024-07-14 13:50:02.794419] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
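Stripped of xtrace noise, the configuration driven through rpc_cmd in this test (delete_subsystem.sh@15-24) plus the perf load reduces to the calls below. A sketch with a hypothetical wrapper name, assuming SPDK's rpc.py is on PATH and can reach the nvmf_tgt started above; every RPC and argument is copied from the log:

```shell
# Setup sequence from delete_subsystem.sh, as seen in the log. Wrapper
# name is illustrative; assumes rpc.py can reach the running nvmf_tgt.
setup_delay_subsystem() {
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512      # null bdev backing the namespace
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # The test then runs I/O against this subsystem and deletes it mid-flight:
  # spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
  #   -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
}
```

The delay bdev is the point of the exercise: with every read/write latency inflated to 1,000,000 us, I/O is guaranteed to still be queued when nvmf_delete_subsystem runs, which is what produces the aborted completions that follow.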
00:14:26.949 13:50:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.950 13:50:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.950 13:50:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error 
(sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 [2024-07-14 13:50:04.876227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7e800c600 is same with the state(5) to be set 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read 
completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error 
(sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write 
completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 starting I/O failed: -6 00:14:26.950 Write completed with error (sct=0, sc=8) 
00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 [2024-07-14 13:50:04.877199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bce0 is same with the state(5) to be set 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:26.950 Write completed with error (sct=0, sc=8) 00:14:26.950 Read completed with error (sct=0, sc=8) 00:14:27.881 [2024-07-14 13:50:05.849613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2088620 is same with the state(5) to be set 00:14:28.138 Write completed with error (sct=0, sc=8) 00:14:28.138 Read completed with error (sct=0, sc=8) 00:14:28.138 Write completed with error (sct=0, sc=8) 00:14:28.138 Read completed with error (sct=0, sc=8) 00:14:28.138 Read completed with error (sct=0, sc=8) 00:14:28.138 Read completed with error (sct=0, sc=8) 00:14:28.138 Write completed with error (sct=0, sc=8) 00:14:28.138 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 
00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 [2024-07-14 13:50:05.878779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7e800c2f0 is same with the state(5) to be set 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 
Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 [2024-07-14 13:50:05.879041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bb00 is same with the state(5) to be set 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with 
error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 [2024-07-14 13:50:05.879532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2070d40 is same with the state(5) to be set 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, 
sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Write completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 Read completed with error (sct=0, sc=8) 00:14:28.139 [2024-07-14 13:50:05.879766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206bec0 is same with the state(5) to be set 00:14:28.139 Initializing NVMe Controllers 00:14:28.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:28.139 Controller IO queue size 128, less than required. 00:14:28.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:28.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:28.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:28.139 Initialization complete. Launching workers. 
00:14:28.139 ========================================================
00:14:28.139 Latency(us)
00:14:28.139 Device Information                                                        : IOPS      MiB/s    Average        min        max
00:14:28.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  182.09     0.09  1016358.48    2186.93  2003775.54
00:14:28.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  148.35     0.07   926347.06     575.65  2002870.16
00:14:28.139 ========================================================
00:14:28.139 Total                                                                    :  330.44     0.16   975947.95     575.65  2003775.54
00:14:28.139
00:14:28.139 [2024-07-14 13:50:05.880583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2088620 (9): Bad file descriptor
00:14:28.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:28.139 13:50:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.139 13:50:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:28.139 13:50:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1408474 00:14:28.139 13:50:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 
common/autotest_common.sh@636 -- # local arg=wait 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1408474 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.703 [2024-07-14 13:50:06.405195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1408979 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:28.703 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:28.703 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.703 [2024-07-14 13:50:06.467248] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:28.960 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:28.961 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:28.961 13:50:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.527 13:50:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.527 13:50:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:29.527 13:50:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.090 13:50:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.090 13:50:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:30.090 13:50:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.654 13:50:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.654 13:50:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:30.654 13:50:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.218 13:50:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.218 13:50:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:31.218 13:50:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.491 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.491 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:31.491 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.763 Initializing NVMe Controllers 00:14:31.763 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.763 Controller IO queue size 128, less than required. 00:14:31.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:31.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:31.763 Initialization complete. Launching workers. 00:14:31.763 ======================================================== 00:14:31.763 Latency(us) 00:14:31.763 Device Information : IOPS MiB/s Average min max 00:14:31.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004516.30 1000157.73 1042726.00 00:14:31.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004895.01 1000163.03 1041115.78 00:14:31.763 ======================================================== 00:14:31.763 Total : 256.00 0.12 1004705.65 1000157.73 1042726.00 00:14:31.763 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1408979 00:14:32.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1408979) - No such process 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1408979 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:32.020 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.021 rmmod nvme_tcp 00:14:32.021 rmmod nvme_fabrics 00:14:32.021 rmmod nvme_keyring 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1408429 ']' 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1408429 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1408429 ']' 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1408429 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:32.021 13:50:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1408429 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1408429' 00:14:32.279 killing process with pid 1408429 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1408429 00:14:32.279 13:50:10 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 1408429 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.279 13:50:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.809 13:50:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:34.809 00:14:34.809 real 0m12.167s 00:14:34.809 user 0m27.601s 00:14:34.809 sys 0m2.918s 00:14:34.809 13:50:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:34.809 13:50:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.809 ************************************ 00:14:34.809 END TEST nvmf_delete_subsystem 00:14:34.809 ************************************ 00:14:34.809 13:50:12 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:34.809 13:50:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:34.809 13:50:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:34.809 13:50:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:34.809 ************************************ 00:14:34.809 START TEST nvmf_ns_masking 00:14:34.809 
************************************ 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:34.809 * Looking for test storage... 00:14:34.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.809 13:50:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=5f4ea58e-4f64-4a8e-9309-8b8e296b20e1 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:34.810 13:50:12 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.714 
13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:36.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:36.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:36.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:36.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:36.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:14:36.714 00:14:36.714 --- 10.0.0.2 ping statistics --- 00:14:36.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.714 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:14:36.714 00:14:36.714 --- 10.0.0.1 ping statistics --- 00:14:36.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.714 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
00:14:36.714 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1411318 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1411318 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1411318 ']' 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:36.715 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:36.715 [2024-07-14 13:50:14.555659] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:36.715 [2024-07-14 13:50:14.555737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.715 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.715 [2024-07-14 13:50:14.630697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.973 [2024-07-14 13:50:14.722785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.973 [2024-07-14 13:50:14.722842] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:36.973 [2024-07-14 13:50:14.722857] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.973 [2024-07-14 13:50:14.722870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.973 [2024-07-14 13:50:14.722893] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.973 [2024-07-14 13:50:14.722957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.973 [2024-07-14 13:50:14.723090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.973 [2024-07-14 13:50:14.723207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.973 [2024-07-14 13:50:14.723210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.539 13:50:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:37.539 13:50:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:37.539 13:50:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.539 13:50:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.539 13:50:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.796 13:50:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.796 13:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:37.796 [2024-07-14 13:50:15.759476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.054 13:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:38.054 13:50:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:38.054 13:50:15 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.054 Malloc1 00:14:38.311 13:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.311 Malloc2 00:14:38.586 13:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:38.586 13:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:38.843 13:50:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.100 [2024-07-14 13:50:17.011732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.100 13:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:39.100 13:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5f4ea58e-4f64-4a8e-9309-8b8e296b20e1 -a 10.0.0.2 -s 4420 -i 4 00:14:39.357 13:50:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.357 13:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:39.357 13:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.357 13:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:39.357 13:50:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 
-- # sleep 2 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:41.254 [ 0]:0x1 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.254 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.512 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b77c3f08cd14c24bb9779c9bad71f10 00:14:41.512 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b77c3f08cd14c24bb9779c9bad71f10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.512 13:50:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:41.769 [ 0]:0x1 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b77c3f08cd14c24bb9779c9bad71f10 00:14:41.769 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b77c3f08cd14c24bb9779c9bad71f10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:41.770 [ 1]:0x2 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:41.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.770 13:50:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.028 13:50:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:42.286 13:50:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:42.286 13:50:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5f4ea58e-4f64-4a8e-9309-8b8e296b20e1 -a 10.0.0.2 -s 4420 -i 4 00:14:42.544 13:50:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:42.544 13:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:42.544 13:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.544 13:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:42.544 13:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:42.544 13:50:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:44.442 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:44.698 13:50:22 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:44.698 [ 0]:0x2 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:44.698 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:44.955 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:44.955 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:44.955 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:44.955 [ 0]:0x1 00:14:44.955 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:14:44.955 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b77c3f08cd14c24bb9779c9bad71f10 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b77c3f08cd14c24bb9779c9bad71f10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:45.212 [ 1]:0x2 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.212 13:50:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:45.470 [ 0]:0x2 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.470 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.728 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:45.728 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5f4ea58e-4f64-4a8e-9309-8b8e296b20e1 -a 10.0.0.2 -s 4420 -i 4 00:14:45.985 13:50:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:45.985 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:45.985 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.985 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:45.985 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:45.985 13:50:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:48.509 13:50:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:48.509 [ 0]:0x1 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2b77c3f08cd14c24bb9779c9bad71f10 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2b77c3f08cd14c24bb9779c9bad71f10 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:48.509 [ 1]:0x2 00:14:48.509 13:50:26 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.509 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:48.766 13:50:26 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:48.766 [ 0]:0x2 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.766 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.767 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.767 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:48.767 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:49.024 [2024-07-14 13:50:26.928023] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:49.024 request: 00:14:49.024 { 00:14:49.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.024 "nsid": 2, 00:14:49.024 "host": "nqn.2016-06.io.spdk:host1", 00:14:49.024 "method": "nvmf_ns_remove_host", 00:14:49.024 "req_id": 1 00:14:49.024 } 00:14:49.024 Got JSON-RPC error response 00:14:49.024 response: 00:14:49.024 { 00:14:49.024 "code": -32602, 00:14:49.024 "message": "Invalid parameters" 00:14:49.024 } 00:14:49.024 
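The JSON-RPC exchange logged above can be reconstructed offline for inspection. This is a minimal Python sketch, not a live RPC call: the request and error dicts are copied verbatim from the log record, and the comment about why the call fails reflects what the test is exercising (host visibility can only be toggled on a namespace added with `--no-auto-visible`; NSID 2 was added auto-visible, so the target rejects the call).

```python
import json

# Request copied from the log record above; this does not contact a live
# SPDK target, it only reconstructs the exchange for inspection.
request = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "nsid": 2,
    "host": "nqn.2016-06.io.spdk:host1",
    "method": "nvmf_ns_remove_host",
    "req_id": 1,
}

# nvmf_ns_remove_host applies only to namespaces created with
# --no-auto-visible; NSID 2 here was auto-visible, so the target
# answered with the standard JSON-RPC "Invalid parameters" error.
error = {"code": -32602, "message": "Invalid parameters"}

print(json.dumps(request, indent=2))
print(error["code"], error["message"])
```

The `NOT` wrapper in the surrounding trace (`common/autotest_common.sh@648`) exists precisely to assert that this call exits non-zero, which is why the error response is the expected outcome rather than a failure of the test.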
13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.024 13:50:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # es=1 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:49.024 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:49.282 [ 0]:0x2 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ef047d4c7f4f47519aaadbee482fdc72 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ef047d4c7f4f47519aaadbee482fdc72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.282 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.540 rmmod nvme_tcp 00:14:49.540 rmmod nvme_fabrics 00:14:49.540 rmmod nvme_keyring 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1411318 ']' 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1411318 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1411318 ']' 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1411318 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:49.540 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1411318 00:14:49.798 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:49.798 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:49.798 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1411318' 00:14:49.798 killing process with pid 1411318 00:14:49.798 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1411318 00:14:49.798 13:50:27 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@970 -- # wait 1411318 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.057 13:50:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.016 13:50:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.016 00:14:52.016 real 0m17.505s 00:14:52.016 user 0m55.656s 00:14:52.016 sys 0m3.882s 00:14:52.016 13:50:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:52.016 13:50:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.016 ************************************ 00:14:52.016 END TEST nvmf_ns_masking 00:14:52.016 ************************************ 00:14:52.016 13:50:29 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:52.016 13:50:29 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:52.016 13:50:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:52.016 13:50:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:52.016 13:50:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.016 ************************************ 00:14:52.016 START TEST nvmf_nvme_cli 00:14:52.016 
************************************ 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:52.016 * Looking for test storage... 00:14:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.016 13:50:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.017 13:50:29 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.017 13:50:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.542 13:50:31 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.542 13:50:31 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:54.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:54.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.542 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.543 13:50:31 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:54.543 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:54.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.543 13:50:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:14:54.543 00:14:54.543 --- 10.0.0.2 ping statistics --- 00:14:54.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.543 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:14:54.543 00:14:54.543 --- 10.0.0.1 ping statistics --- 00:14:54.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.543 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1414885 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1414885 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1414885 ']' 
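
The `nvmf_tcp_init` bring-up traced above (nvmf/common.sh) moves the target NIC into a network namespace and gives each side an address, then verifies reachability with `ping`. A rough recap of those commands, using the interface and namespace names from this log (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`); this requires root and the same physical NICs, so it is a sketch of the traced steps, not a portable script:

```shell
# target NIC goes into its own namespace; initiator NIC stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# addresses: 10.0.0.1 = initiator side, 10.0.0.2 = target side (per the log)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic in, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The subsequent `nvmf_tgt` is then launched under `ip netns exec cvl_0_0_ns_spdk` so it listens inside the namespace, as the `NVMF_TARGET_NS_CMD` prefix in the trace shows.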
00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 [2024-07-14 13:50:32.136629] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:54.543 [2024-07-14 13:50:32.136709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.543 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.543 [2024-07-14 13:50:32.210964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.543 [2024-07-14 13:50:32.302156] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.543 [2024-07-14 13:50:32.302236] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.543 [2024-07-14 13:50:32.302252] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.543 [2024-07-14 13:50:32.302266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.543 [2024-07-14 13:50:32.302278] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:54.543 [2024-07-14 13:50:32.302367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.543 [2024-07-14 13:50:32.302435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.543 [2024-07-14 13:50:32.302539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.543 [2024-07-14 13:50:32.302542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 [2024-07-14 13:50:32.455715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 Malloc0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.543 
13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 Malloc1 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.543 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.800 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.800 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:54.800 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.800 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
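
The `rpc_cmd` calls traced in this stretch of nvme_cli.sh build the target configuration. Expressed as plain `scripts/rpc.py` invocations against a running `nvmf_tgt` (sizes, NQN, serial, and addresses are taken from the log; this is a recap of the traced RPCs, not an independently verified script):

```shell
RPC=scripts/rpc.py   # path relative to the SPDK repo root

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8k in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0             # two 64 MiB / 512 B-block ram bdevs
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

With both listeners up, the `nvme discover` that follows in the trace reports two records: the discovery subsystem itself and `nqn.2016-06.io.spdk:cnode1`.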
00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.801 [2024-07-14 13:50:32.538994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:54.801 00:14:54.801 Discovery Log Number of Records 2, Generation counter 2 00:14:54.801 =====Discovery Log Entry 0====== 00:14:54.801 trtype: tcp 00:14:54.801 adrfam: ipv4 00:14:54.801 subtype: current discovery subsystem 00:14:54.801 treq: not required 00:14:54.801 portid: 0 00:14:54.801 trsvcid: 4420 00:14:54.801 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:54.801 traddr: 10.0.0.2 00:14:54.801 eflags: explicit discovery connections, duplicate discovery information 00:14:54.801 sectype: none 00:14:54.801 =====Discovery Log Entry 1====== 00:14:54.801 trtype: tcp 00:14:54.801 adrfam: ipv4 00:14:54.801 subtype: nvme subsystem 00:14:54.801 treq: not required 00:14:54.801 portid: 0 00:14:54.801 trsvcid: 4420 00:14:54.801 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:54.801 traddr: 10.0.0.2 00:14:54.801 eflags: none 00:14:54.801 sectype: none 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:54.801 13:50:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:55.365 13:50:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:55.365 13:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:55.365 13:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.365 13:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:55.365 13:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:55.365 13:50:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
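
The `get_nvme_devs` helper traced here (nvmf/common.sh@521-526) just filters `nvme list` output down to `/dev/nvme*` device nodes by reading the first field of each line. A self-contained sketch of that logic, fed a canned sample instead of live `nvme list` output so it runs without NVMe hardware (the sample column layout is illustrative):

```shell
# Filter `nvme list`-style output to device nodes, mirroring the
# `read -r dev _` + `[[ $dev == /dev/nvme* ]]` loop in the trace
# (written with `case` for POSIX-sh portability).
get_nvme_devs() {
    while read -r dev _; do
        case $dev in
            /dev/nvme*) echo "$dev" ;;   # keep device nodes, skip header/ruler lines
        esac
    done
}

sample='Node                  SN                   Model
--------------------- -------------------- ----------------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK bdev Controller
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK bdev Controller'

printf '%s\n' "$sample" | get_nvme_devs
```

In the test itself the count of devices found this way (2, one per namespace of cnode1) is compared before and after `nvme connect` to confirm both namespaces appeared.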
00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:57.890 /dev/nvme0n1 ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.890 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.890 rmmod nvme_tcp 00:14:58.148 rmmod nvme_fabrics 00:14:58.148 rmmod nvme_keyring 00:14:58.148 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.148 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1414885 ']' 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1414885 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@946 -- # '[' -z 1414885 ']' 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1414885 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1414885 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1414885' 00:14:58.149 killing process with pid 1414885 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1414885 00:14:58.149 13:50:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1414885 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.408 13:50:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.310 13:50:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:00.310 00:15:00.310 real 0m8.325s 00:15:00.310 user 
0m15.960s 00:15:00.310 sys 0m2.147s 00:15:00.310 13:50:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:00.310 13:50:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:00.310 ************************************ 00:15:00.310 END TEST nvmf_nvme_cli 00:15:00.310 ************************************ 00:15:00.310 13:50:38 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:00.310 13:50:38 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:00.310 13:50:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:00.310 13:50:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.310 13:50:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.310 ************************************ 00:15:00.310 START TEST nvmf_vfio_user 00:15:00.310 ************************************ 00:15:00.310 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:00.570 * Looking for test storage... 
00:15:00.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.570 
13:50:38 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1415810 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1415810' 00:15:00.570 Process pid: 1415810 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1415810 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1415810 ']' 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:00.570 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:00.570 [2024-07-14 13:50:38.402464] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:00.570 [2024-07-14 13:50:38.402544] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.570 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.570 [2024-07-14 13:50:38.468047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.828 [2024-07-14 13:50:38.560053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.828 [2024-07-14 13:50:38.560105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.828 [2024-07-14 13:50:38.560122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.828 [2024-07-14 13:50:38.560136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.828 [2024-07-14 13:50:38.560155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:00.828 [2024-07-14 13:50:38.560245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.828 [2024-07-14 13:50:38.560320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.828 [2024-07-14 13:50:38.560352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.828 [2024-07-14 13:50:38.560355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.828 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:00.828 13:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:00.828 13:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:01.758 13:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:02.015 13:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:02.015 13:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:02.015 13:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.015 13:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:02.015 13:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:02.587 Malloc1 00:15:02.587 13:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:02.587 13:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:02.846 13:50:40 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:03.102 13:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:03.103 13:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:03.103 13:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:03.360 Malloc2 00:15:03.360 13:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:03.617 13:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:03.875 13:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:04.133 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:04.133 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:04.133 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.133 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:04.133 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:04.133 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:04.133 [2024-07-14 13:50:42.072040] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:04.133 [2024-07-14 13:50:42.072082] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416229 ] 00:15:04.133 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.133 [2024-07-14 13:50:42.107037] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:04.133 [2024-07-14 13:50:42.113359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:04.133 [2024-07-14 13:50:42.113389] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9da2851000 00:15:04.393 [2024-07-14 13:50:42.114358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.115360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.116378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.117366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.118372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.119378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.120381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.121389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:04.393 [2024-07-14 13:50:42.122394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:04.393 [2024-07-14 13:50:42.122417] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9da1603000 00:15:04.393 [2024-07-14 13:50:42.123532] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:04.393 [2024-07-14 13:50:42.137526] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:04.393 [2024-07-14 13:50:42.137561] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:04.393 [2024-07-14 13:50:42.146538] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:04.393 [2024-07-14 13:50:42.146588] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:04.393 [2024-07-14 13:50:42.146676] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:04.393 [2024-07-14 13:50:42.146704] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:04.393 [2024-07-14 13:50:42.146714] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:04.393 [2024-07-14 13:50:42.147527] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:04.393 [2024-07-14 13:50:42.147550] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:04.393 [2024-07-14 13:50:42.147564] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:04.393 [2024-07-14 13:50:42.148531] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:04.393 [2024-07-14 13:50:42.148735] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:04.393 [2024-07-14 13:50:42.148752] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:04.393 [2024-07-14 13:50:42.149537] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:04.393 [2024-07-14 13:50:42.149554] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:04.393 [2024-07-14 13:50:42.150542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:04.393 [2024-07-14 13:50:42.150560] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:04.393 [2024-07-14 13:50:42.150569] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:04.393 [2024-07-14 13:50:42.150580] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:04.393 [2024-07-14 13:50:42.150689] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:04.393 [2024-07-14 13:50:42.150697] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:04.393 [2024-07-14 13:50:42.150706] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:04.393 [2024-07-14 13:50:42.151555] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:04.393 [2024-07-14 13:50:42.152552] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:04.393 [2024-07-14 13:50:42.153553] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:04.393 [2024-07-14 13:50:42.154549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.393 [2024-07-14 13:50:42.154681] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:04.393 [2024-07-14 13:50:42.155560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:04.393 [2024-07-14 13:50:42.155578] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:04.393 [2024-07-14 13:50:42.155586] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:04.393 [2024-07-14 13:50:42.155610] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:04.393 [2024-07-14 13:50:42.155623] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:04.393 [2024-07-14 13:50:42.155649] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.393 [2024-07-14 13:50:42.155659] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.393 [2024-07-14 13:50:42.155677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.393 [2024-07-14 13:50:42.155745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:04.393 [2024-07-14 13:50:42.155765] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:04.393 [2024-07-14 13:50:42.155773] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:04.393 [2024-07-14 13:50:42.155781] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:04.393 [2024-07-14 13:50:42.155788] 
nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:04.393 [2024-07-14 13:50:42.155795] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:04.393 [2024-07-14 13:50:42.155802] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:04.393 [2024-07-14 13:50:42.155810] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:04.393 [2024-07-14 13:50:42.155822] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:04.393 [2024-07-14 13:50:42.155837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:04.393 [2024-07-14 13:50:42.155850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:04.393 [2024-07-14 13:50:42.155889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.393 [2024-07-14 13:50:42.155904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.393 [2024-07-14 13:50:42.155916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.393 [2024-07-14 13:50:42.155928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:04.393 [2024-07-14 13:50:42.155936] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:04.393 [2024-07-14 13:50:42.155952] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:04.393 [2024-07-14 13:50:42.155967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:04.393 [2024-07-14 13:50:42.155979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.155989] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:04.394 [2024-07-14 13:50:42.155997] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156008] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156024] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156119] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156135] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156149] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:04.394 [2024-07-14 13:50:42.156157] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:04.394 [2024-07-14 13:50:42.156188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156218] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:04.394 [2024-07-14 13:50:42.156251] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156266] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156277] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.394 [2024-07-14 13:50:42.156285] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.394 [2024-07-14 13:50:42.156294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156334] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156347] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156359] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:04.394 [2024-07-14 13:50:42.156366] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.394 [2024-07-14 13:50:42.156375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156402] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156413] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156426] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156436] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156444] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156452] 
nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:04.394 [2024-07-14 13:50:42.156459] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:04.394 [2024-07-14 13:50:42.156471] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:04.394 [2024-07-14 13:50:42.156500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156616] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:04.394 
[2024-07-14 13:50:42.156625] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:04.394 [2024-07-14 13:50:42.156631] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:04.394 [2024-07-14 13:50:42.156637] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:04.394 [2024-07-14 13:50:42.156646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:04.394 [2024-07-14 13:50:42.156657] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:04.394 [2024-07-14 13:50:42.156665] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:04.394 [2024-07-14 13:50:42.156673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156684] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:04.394 [2024-07-14 13:50:42.156691] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:04.394 [2024-07-14 13:50:42.156700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156711] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:04.394 [2024-07-14 13:50:42.156719] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:04.394 [2024-07-14 13:50:42.156727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:15:04.394 [2024-07-14 13:50:42.156738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:04.394 [2024-07-14 13:50:42.156786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:04.394 ===================================================== 00:15:04.394 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.394 ===================================================== 00:15:04.394 Controller Capabilities/Features 00:15:04.394 ================================ 00:15:04.394 Vendor ID: 4e58 00:15:04.394 Subsystem Vendor ID: 4e58 00:15:04.394 Serial Number: SPDK1 00:15:04.394 Model Number: SPDK bdev Controller 00:15:04.394 Firmware Version: 24.05.1 00:15:04.394 Recommended Arb Burst: 6 00:15:04.394 IEEE OUI Identifier: 8d 6b 50 00:15:04.394 Multi-path I/O 00:15:04.394 May have multiple subsystem ports: Yes 00:15:04.394 May have multiple controllers: Yes 00:15:04.394 Associated with SR-IOV VF: No 00:15:04.394 Max Data Transfer Size: 131072 00:15:04.394 Max Number of Namespaces: 32 00:15:04.394 Max Number of I/O Queues: 127 00:15:04.394 NVMe Specification Version (VS): 1.3 00:15:04.394 NVMe Specification Version (Identify): 1.3 00:15:04.394 Maximum Queue Entries: 256 00:15:04.394 Contiguous Queues Required: Yes 00:15:04.394 Arbitration Mechanisms Supported 00:15:04.394 Weighted Round Robin: Not Supported 00:15:04.394 Vendor Specific: Not Supported 00:15:04.394 Reset Timeout: 15000 ms 00:15:04.394 Doorbell Stride: 4 bytes 00:15:04.394 NVM 
Subsystem Reset: Not Supported 00:15:04.394 Command Sets Supported 00:15:04.394 NVM Command Set: Supported 00:15:04.394 Boot Partition: Not Supported 00:15:04.394 Memory Page Size Minimum: 4096 bytes 00:15:04.394 Memory Page Size Maximum: 4096 bytes 00:15:04.394 Persistent Memory Region: Not Supported 00:15:04.394 Optional Asynchronous Events Supported 00:15:04.394 Namespace Attribute Notices: Supported 00:15:04.394 Firmware Activation Notices: Not Supported 00:15:04.394 ANA Change Notices: Not Supported 00:15:04.394 PLE Aggregate Log Change Notices: Not Supported 00:15:04.394 LBA Status Info Alert Notices: Not Supported 00:15:04.394 EGE Aggregate Log Change Notices: Not Supported 00:15:04.394 Normal NVM Subsystem Shutdown event: Not Supported 00:15:04.394 Zone Descriptor Change Notices: Not Supported 00:15:04.394 Discovery Log Change Notices: Not Supported 00:15:04.394 Controller Attributes 00:15:04.394 128-bit Host Identifier: Supported 00:15:04.394 Non-Operational Permissive Mode: Not Supported 00:15:04.394 NVM Sets: Not Supported 00:15:04.394 Read Recovery Levels: Not Supported 00:15:04.394 Endurance Groups: Not Supported 00:15:04.394 Predictable Latency Mode: Not Supported 00:15:04.394 Traffic Based Keep ALive: Not Supported 00:15:04.394 Namespace Granularity: Not Supported 00:15:04.394 SQ Associations: Not Supported 00:15:04.394 UUID List: Not Supported 00:15:04.394 Multi-Domain Subsystem: Not Supported 00:15:04.394 Fixed Capacity Management: Not Supported 00:15:04.394 Variable Capacity Management: Not Supported 00:15:04.394 Delete Endurance Group: Not Supported 00:15:04.394 Delete NVM Set: Not Supported 00:15:04.394 Extended LBA Formats Supported: Not Supported 00:15:04.394 Flexible Data Placement Supported: Not Supported 00:15:04.394 00:15:04.394 Controller Memory Buffer Support 00:15:04.394 ================================ 00:15:04.394 Supported: No 00:15:04.394 00:15:04.394 Persistent Memory Region Support 00:15:04.394 ================================ 
00:15:04.395 Supported: No 00:15:04.395 00:15:04.395 Admin Command Set Attributes 00:15:04.395 ============================ 00:15:04.395 Security Send/Receive: Not Supported 00:15:04.395 Format NVM: Not Supported 00:15:04.395 Firmware Activate/Download: Not Supported 00:15:04.395 Namespace Management: Not Supported 00:15:04.395 Device Self-Test: Not Supported 00:15:04.395 Directives: Not Supported 00:15:04.395 NVMe-MI: Not Supported 00:15:04.395 Virtualization Management: Not Supported 00:15:04.395 Doorbell Buffer Config: Not Supported 00:15:04.395 Get LBA Status Capability: Not Supported 00:15:04.395 Command & Feature Lockdown Capability: Not Supported 00:15:04.395 Abort Command Limit: 4 00:15:04.395 Async Event Request Limit: 4 00:15:04.395 Number of Firmware Slots: N/A 00:15:04.395 Firmware Slot 1 Read-Only: N/A 00:15:04.395 Firmware Activation Without Reset: N/A 00:15:04.395 Multiple Update Detection Support: N/A 00:15:04.395 Firmware Update Granularity: No Information Provided 00:15:04.395 Per-Namespace SMART Log: No 00:15:04.395 Asymmetric Namespace Access Log Page: Not Supported 00:15:04.395 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:04.395 Command Effects Log Page: Supported 00:15:04.395 Get Log Page Extended Data: Supported 00:15:04.395 Telemetry Log Pages: Not Supported 00:15:04.395 Persistent Event Log Pages: Not Supported 00:15:04.395 Supported Log Pages Log Page: May Support 00:15:04.395 Commands Supported & Effects Log Page: Not Supported 00:15:04.395 Feature Identifiers & Effects Log Page:May Support 00:15:04.395 NVMe-MI Commands & Effects Log Page: May Support 00:15:04.395 Data Area 4 for Telemetry Log: Not Supported 00:15:04.395 Error Log Page Entries Supported: 128 00:15:04.395 Keep Alive: Supported 00:15:04.395 Keep Alive Granularity: 10000 ms 00:15:04.395 00:15:04.395 NVM Command Set Attributes 00:15:04.395 ========================== 00:15:04.395 Submission Queue Entry Size 00:15:04.395 Max: 64 00:15:04.395 Min: 64 00:15:04.395 Completion 
Queue Entry Size 00:15:04.395 Max: 16 00:15:04.395 Min: 16 00:15:04.395 Number of Namespaces: 32 00:15:04.395 Compare Command: Supported 00:15:04.395 Write Uncorrectable Command: Not Supported 00:15:04.395 Dataset Management Command: Supported 00:15:04.395 Write Zeroes Command: Supported 00:15:04.395 Set Features Save Field: Not Supported 00:15:04.395 Reservations: Not Supported 00:15:04.395 Timestamp: Not Supported 00:15:04.395 Copy: Supported 00:15:04.395 Volatile Write Cache: Present 00:15:04.395 Atomic Write Unit (Normal): 1 00:15:04.395 Atomic Write Unit (PFail): 1 00:15:04.395 Atomic Compare & Write Unit: 1 00:15:04.395 Fused Compare & Write: Supported 00:15:04.395 Scatter-Gather List 00:15:04.395 SGL Command Set: Supported (Dword aligned) 00:15:04.395 SGL Keyed: Not Supported 00:15:04.395 SGL Bit Bucket Descriptor: Not Supported 00:15:04.395 SGL Metadata Pointer: Not Supported 00:15:04.395 Oversized SGL: Not Supported 00:15:04.395 SGL Metadata Address: Not Supported 00:15:04.395 SGL Offset: Not Supported 00:15:04.395 Transport SGL Data Block: Not Supported 00:15:04.395 Replay Protected Memory Block: Not Supported 00:15:04.395 00:15:04.395 Firmware Slot Information 00:15:04.395 ========================= 00:15:04.395 Active slot: 1 00:15:04.395 Slot 1 Firmware Revision: 24.05.1 00:15:04.395 00:15:04.395 00:15:04.395 Commands Supported and Effects 00:15:04.395 ============================== 00:15:04.395 Admin Commands 00:15:04.395 -------------- 00:15:04.395 Get Log Page (02h): Supported 00:15:04.395 Identify (06h): Supported 00:15:04.395 Abort (08h): Supported 00:15:04.395 Set Features (09h): Supported 00:15:04.395 Get Features (0Ah): Supported 00:15:04.395 Asynchronous Event Request (0Ch): Supported 00:15:04.395 Keep Alive (18h): Supported 00:15:04.395 I/O Commands 00:15:04.395 ------------ 00:15:04.395 Flush (00h): Supported LBA-Change 00:15:04.395 Write (01h): Supported LBA-Change 00:15:04.395 Read (02h): Supported 00:15:04.395 Compare (05h): Supported 
00:15:04.395 Write Zeroes (08h): Supported LBA-Change 00:15:04.395 Dataset Management (09h): Supported LBA-Change 00:15:04.395 Copy (19h): Supported LBA-Change 00:15:04.395 Unknown (79h): Supported LBA-Change 00:15:04.395 Unknown (7Ah): Supported 00:15:04.395 00:15:04.395 Error Log 00:15:04.395 ========= 00:15:04.395 00:15:04.395 Arbitration 00:15:04.395 =========== 00:15:04.395 Arbitration Burst: 1 00:15:04.395 00:15:04.395 Power Management 00:15:04.395 ================ 00:15:04.395 Number of Power States: 1 00:15:04.395 Current Power State: Power State #0 00:15:04.395 Power State #0: 00:15:04.395 Max Power: 0.00 W 00:15:04.395 Non-Operational State: Operational 00:15:04.395 Entry Latency: Not Reported 00:15:04.395 Exit Latency: Not Reported 00:15:04.395 Relative Read Throughput: 0 00:15:04.395 Relative Read Latency: 0 00:15:04.395 Relative Write Throughput: 0 00:15:04.395 Relative Write Latency: 0 00:15:04.395 Idle Power: Not Reported 00:15:04.395 Active Power: Not Reported 00:15:04.395 Non-Operational Permissive Mode: Not Supported 00:15:04.395 00:15:04.395 Health Information 00:15:04.395 ================== 00:15:04.395 Critical Warnings: 00:15:04.395 Available Spare Space: OK 00:15:04.395 Temperature: OK 00:15:04.395 Device Reliability: OK 00:15:04.395 Read Only: No 00:15:04.395 Volatile Memory Backup: OK 00:15:04.395 Current Temperature: 0 Kelvin[2024-07-14 13:50:42.156953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:04.395 [2024-07-14 13:50:42.156971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:04.395 [2024-07-14 13:50:42.157010] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:04.395 [2024-07-14 13:50:42.157028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-07-14 13:50:42.157039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-07-14 13:50:42.157049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-07-14 13:50:42.157059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:04.395 [2024-07-14 13:50:42.157571] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:04.395 [2024-07-14 13:50:42.157590] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:04.395 [2024-07-14 13:50:42.158573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.395 [2024-07-14 13:50:42.158659] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:04.395 [2024-07-14 13:50:42.158673] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:04.395 [2024-07-14 13:50:42.159579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:04.395 [2024-07-14 13:50:42.159600] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:04.395 [2024-07-14 13:50:42.159652] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:04.395 [2024-07-14 13:50:42.161640] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, 
IOVA 0x200000200000, Size 0x200000 00:15:04.395 (-273 Celsius) 00:15:04.395 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:04.395 Available Spare: 0% 00:15:04.395 Available Spare Threshold: 0% 00:15:04.395 Life Percentage Used: 0% 00:15:04.395 Data Units Read: 0 00:15:04.395 Data Units Written: 0 00:15:04.395 Host Read Commands: 0 00:15:04.395 Host Write Commands: 0 00:15:04.395 Controller Busy Time: 0 minutes 00:15:04.395 Power Cycles: 0 00:15:04.395 Power On Hours: 0 hours 00:15:04.395 Unsafe Shutdowns: 0 00:15:04.395 Unrecoverable Media Errors: 0 00:15:04.395 Lifetime Error Log Entries: 0 00:15:04.395 Warning Temperature Time: 0 minutes 00:15:04.395 Critical Temperature Time: 0 minutes 00:15:04.395 00:15:04.395 Number of Queues 00:15:04.395 ================ 00:15:04.395 Number of I/O Submission Queues: 127 00:15:04.395 Number of I/O Completion Queues: 127 00:15:04.395 00:15:04.395 Active Namespaces 00:15:04.395 ================= 00:15:04.395 Namespace ID:1 00:15:04.395 Error Recovery Timeout: Unlimited 00:15:04.395 Command Set Identifier: NVM (00h) 00:15:04.395 Deallocate: Supported 00:15:04.395 Deallocated/Unwritten Error: Not Supported 00:15:04.395 Deallocated Read Value: Unknown 00:15:04.395 Deallocate in Write Zeroes: Not Supported 00:15:04.395 Deallocated Guard Field: 0xFFFF 00:15:04.395 Flush: Supported 00:15:04.395 Reservation: Supported 00:15:04.395 Namespace Sharing Capabilities: Multiple Controllers 00:15:04.395 Size (in LBAs): 131072 (0GiB) 00:15:04.395 Capacity (in LBAs): 131072 (0GiB) 00:15:04.395 Utilization (in LBAs): 131072 (0GiB) 00:15:04.395 NGUID: 2D2CF0532364490B8CA02A82C181FA5D 00:15:04.395 UUID: 2d2cf053-2364-490b-8ca0-2a82c181fa5d 00:15:04.395 Thin Provisioning: Not Supported 00:15:04.395 Per-NS Atomic Units: Yes 00:15:04.395 Atomic Boundary Size (Normal): 0 00:15:04.395 Atomic Boundary Size (PFail): 0 00:15:04.395 Atomic Boundary Offset: 0 00:15:04.395 Maximum Single Source Range Length: 65535 00:15:04.395 Maximum Copy Length: 65535 
00:15:04.395 Maximum Source Range Count: 1 00:15:04.395 NGUID/EUI64 Never Reused: No 00:15:04.395 Namespace Write Protected: No 00:15:04.395 Number of LBA Formats: 1 00:15:04.395 Current LBA Format: LBA Format #00 00:15:04.396 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:04.396 00:15:04.396 13:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:04.396 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.654 [2024-07-14 13:50:42.392752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.916 Initializing NVMe Controllers 00:15:09.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:09.916 Initialization complete. Launching workers. 
00:15:09.916 ======================================================== 00:15:09.916 Latency(us) 00:15:09.916 Device Information : IOPS MiB/s Average min max 00:15:09.916 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35494.96 138.65 3607.35 1164.80 7334.74 00:15:09.916 ======================================================== 00:15:09.916 Total : 35494.96 138.65 3607.35 1164.80 7334.74 00:15:09.916 00:15:09.916 [2024-07-14 13:50:47.417171] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.916 13:50:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:09.916 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.916 [2024-07-14 13:50:47.651347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.214 Initializing NVMe Controllers 00:15:15.214 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.214 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:15.214 Initialization complete. Launching workers. 
00:15:15.214 ======================================================== 00:15:15.214 Latency(us) 00:15:15.214 Device Information : IOPS MiB/s Average min max 00:15:15.214 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15982.00 62.43 8019.06 5998.87 15976.65 00:15:15.214 ======================================================== 00:15:15.214 Total : 15982.00 62.43 8019.06 5998.87 15976.65 00:15:15.214 00:15:15.214 [2024-07-14 13:50:52.689090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.214 13:50:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:15.214 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.214 [2024-07-14 13:50:52.900157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.497 [2024-07-14 13:50:57.974223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.497 Initializing NVMe Controllers 00:15:20.497 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.497 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:20.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:20.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:20.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:20.497 Initialization complete. Launching workers. 
00:15:20.497 Starting thread on core 2 00:15:20.497 Starting thread on core 3 00:15:20.497 Starting thread on core 1 00:15:20.497 13:50:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:20.497 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.497 [2024-07-14 13:50:58.264930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.777 [2024-07-14 13:51:01.333576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.777 Initializing NVMe Controllers 00:15:23.777 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.777 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.778 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:23.778 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:23.778 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:23.778 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:23.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:23.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:23.778 Initialization complete. Launching workers. 
00:15:23.778 Starting thread on core 1 with urgent priority queue 00:15:23.778 Starting thread on core 2 with urgent priority queue 00:15:23.778 Starting thread on core 3 with urgent priority queue 00:15:23.778 Starting thread on core 0 with urgent priority queue 00:15:23.778 SPDK bdev Controller (SPDK1 ) core 0: 5496.67 IO/s 18.19 secs/100000 ios 00:15:23.778 SPDK bdev Controller (SPDK1 ) core 1: 5296.33 IO/s 18.88 secs/100000 ios 00:15:23.778 SPDK bdev Controller (SPDK1 ) core 2: 5478.00 IO/s 18.25 secs/100000 ios 00:15:23.778 SPDK bdev Controller (SPDK1 ) core 3: 5723.67 IO/s 17.47 secs/100000 ios 00:15:23.778 ======================================================== 00:15:23.778 00:15:23.778 13:51:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:23.778 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.778 [2024-07-14 13:51:01.623774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.778 Initializing NVMe Controllers 00:15:23.778 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.778 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.778 Namespace ID: 1 size: 0GB 00:15:23.778 Initialization complete. 00:15:23.778 INFO: using host memory buffer for IO 00:15:23.778 Hello world! 
00:15:23.778 [2024-07-14 13:51:01.657376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.778 13:51:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:23.778 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.036 [2024-07-14 13:51:01.952348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.407 Initializing NVMe Controllers 00:15:25.407 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.407 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.407 Initialization complete. Launching workers. 00:15:25.407 submit (in ns) avg, min, max = 6793.5, 3464.4, 4021853.3 00:15:25.407 complete (in ns) avg, min, max = 26178.4, 2056.7, 4019625.6 00:15:25.407 00:15:25.407 Submit histogram 00:15:25.407 ================ 00:15:25.407 Range in us Cumulative Count 00:15:25.407 3.461 - 3.484: 0.0076% ( 1) 00:15:25.407 3.484 - 3.508: 0.0151% ( 1) 00:15:25.407 3.508 - 3.532: 0.1134% ( 13) 00:15:25.407 3.532 - 3.556: 0.6654% ( 73) 00:15:25.407 3.556 - 3.579: 1.8297% ( 154) 00:15:25.407 3.579 - 3.603: 4.2870% ( 325) 00:15:25.407 3.603 - 3.627: 8.5740% ( 567) 00:15:25.407 3.627 - 3.650: 12.7779% ( 556) 00:15:25.408 3.650 - 3.674: 17.5639% ( 633) 00:15:25.408 3.674 - 3.698: 22.7204% ( 682) 00:15:25.408 3.698 - 3.721: 29.8276% ( 940) 00:15:25.408 3.721 - 3.745: 35.3546% ( 731) 00:15:25.408 3.745 - 3.769: 40.9118% ( 735) 00:15:25.408 3.769 - 3.793: 45.8491% ( 653) 00:15:25.408 3.793 - 3.816: 51.0736% ( 691) 00:15:25.408 3.816 - 3.840: 55.8143% ( 627) 00:15:25.408 3.840 - 3.864: 60.3584% ( 601) 00:15:25.408 3.864 - 3.887: 64.0783% ( 492) 00:15:25.408 3.887 - 3.911: 67.5412% ( 458) 00:15:25.408 3.911 - 3.935: 71.2007% ( 484) 
00:15:25.408 3.935 - 3.959: 74.3384% ( 415) 00:15:25.408 3.959 - 3.982: 77.4081% ( 406) 00:15:25.408 3.982 - 4.006: 80.0091% ( 344) 00:15:25.408 4.006 - 4.030: 82.0883% ( 275) 00:15:25.408 4.030 - 4.053: 83.9105% ( 241) 00:15:25.408 4.053 - 4.077: 85.3546% ( 191) 00:15:25.408 4.077 - 4.101: 86.7458% ( 184) 00:15:25.408 4.101 - 4.124: 87.8421% ( 145) 00:15:25.408 4.124 - 4.148: 88.8855% ( 138) 00:15:25.408 4.148 - 4.172: 89.8306% ( 125) 00:15:25.408 4.172 - 4.196: 90.5640% ( 97) 00:15:25.408 4.196 - 4.219: 91.1235% ( 74) 00:15:25.408 4.219 - 4.243: 91.6377% ( 68) 00:15:25.408 4.243 - 4.267: 92.0157% ( 50) 00:15:25.408 4.267 - 4.290: 92.4769% ( 61) 00:15:25.408 4.290 - 4.314: 92.8247% ( 46) 00:15:25.408 4.314 - 4.338: 93.0213% ( 26) 00:15:25.408 4.338 - 4.361: 93.2784% ( 34) 00:15:25.408 4.361 - 4.385: 93.4296% ( 20) 00:15:25.408 4.385 - 4.409: 93.5808% ( 20) 00:15:25.408 4.409 - 4.433: 93.7698% ( 25) 00:15:25.408 4.433 - 4.456: 93.9816% ( 28) 00:15:25.408 4.456 - 4.480: 94.1252% ( 19) 00:15:25.408 4.480 - 4.504: 94.3218% ( 26) 00:15:25.408 4.504 - 4.527: 94.4730% ( 20) 00:15:25.408 4.527 - 4.551: 94.5864% ( 15) 00:15:25.408 4.551 - 4.575: 94.7603% ( 23) 00:15:25.408 4.575 - 4.599: 94.8662% ( 14) 00:15:25.408 4.599 - 4.622: 95.0325% ( 22) 00:15:25.408 4.622 - 4.646: 95.1459% ( 15) 00:15:25.408 4.646 - 4.670: 95.2442% ( 13) 00:15:25.408 4.670 - 4.693: 95.3425% ( 13) 00:15:25.408 4.693 - 4.717: 95.4484% ( 14) 00:15:25.408 4.717 - 4.741: 95.5618% ( 15) 00:15:25.408 4.741 - 4.764: 95.6601% ( 13) 00:15:25.408 4.764 - 4.788: 95.7281% ( 9) 00:15:25.408 4.788 - 4.812: 95.8113% ( 11) 00:15:25.408 4.812 - 4.836: 95.8869% ( 10) 00:15:25.408 4.836 - 4.859: 95.9474% ( 8) 00:15:25.408 4.859 - 4.883: 96.0305% ( 11) 00:15:25.408 4.883 - 4.907: 96.0910% ( 8) 00:15:25.408 4.907 - 4.930: 96.1591% ( 9) 00:15:25.408 4.930 - 4.954: 96.2120% ( 7) 00:15:25.408 4.954 - 4.978: 96.2649% ( 7) 00:15:25.408 4.978 - 5.001: 96.3859% ( 16) 00:15:25.408 5.001 - 5.025: 96.4842% ( 13) 00:15:25.408 
5.025 - 5.049: 96.5371% ( 7) 00:15:25.408 5.049 - 5.073: 96.6127% ( 10) 00:15:25.408 5.073 - 5.096: 96.6808% ( 9) 00:15:25.408 5.096 - 5.120: 96.7564% ( 10) 00:15:25.408 5.120 - 5.144: 96.7715% ( 2) 00:15:25.408 5.144 - 5.167: 96.8169% ( 6) 00:15:25.408 5.167 - 5.191: 96.8849% ( 9) 00:15:25.408 5.191 - 5.215: 96.9378% ( 7) 00:15:25.408 5.215 - 5.239: 96.9832% ( 6) 00:15:25.408 5.239 - 5.262: 97.0361% ( 7) 00:15:25.408 5.262 - 5.286: 97.1269% ( 12) 00:15:25.408 5.286 - 5.310: 97.1571% ( 4) 00:15:25.408 5.310 - 5.333: 97.2176% ( 8) 00:15:25.408 5.333 - 5.357: 97.2705% ( 7) 00:15:25.408 5.357 - 5.381: 97.2932% ( 3) 00:15:25.408 5.381 - 5.404: 97.3235% ( 4) 00:15:25.408 5.404 - 5.428: 97.3461% ( 3) 00:15:25.408 5.428 - 5.452: 97.3688% ( 3) 00:15:25.408 5.452 - 5.476: 97.3764% ( 1) 00:15:25.408 5.476 - 5.499: 97.3839% ( 1) 00:15:25.408 5.499 - 5.523: 97.4369% ( 7) 00:15:25.408 5.523 - 5.547: 97.4595% ( 3) 00:15:25.408 5.547 - 5.570: 97.4671% ( 1) 00:15:25.408 5.570 - 5.594: 97.4822% ( 2) 00:15:25.408 5.594 - 5.618: 97.5125% ( 4) 00:15:25.408 5.618 - 5.641: 97.5427% ( 4) 00:15:25.408 5.665 - 5.689: 97.5654% ( 3) 00:15:25.408 5.689 - 5.713: 97.5881% ( 3) 00:15:25.408 5.713 - 5.736: 97.6108% ( 3) 00:15:25.408 5.736 - 5.760: 97.6259% ( 2) 00:15:25.408 5.760 - 5.784: 97.6561% ( 4) 00:15:25.408 5.784 - 5.807: 97.6939% ( 5) 00:15:25.408 5.807 - 5.831: 97.7091% ( 2) 00:15:25.408 5.831 - 5.855: 97.7544% ( 6) 00:15:25.408 5.855 - 5.879: 97.7620% ( 1) 00:15:25.408 5.879 - 5.902: 97.7847% ( 3) 00:15:25.408 5.902 - 5.926: 97.8149% ( 4) 00:15:25.408 5.926 - 5.950: 97.8300% ( 2) 00:15:25.408 5.950 - 5.973: 97.8678% ( 5) 00:15:25.408 5.973 - 5.997: 97.9056% ( 5) 00:15:25.408 5.997 - 6.021: 97.9283% ( 3) 00:15:25.408 6.021 - 6.044: 97.9661% ( 5) 00:15:25.408 6.044 - 6.068: 97.9737% ( 1) 00:15:25.408 6.068 - 6.116: 97.9888% ( 2) 00:15:25.408 6.116 - 6.163: 98.0266% ( 5) 00:15:25.408 6.163 - 6.210: 98.0493% ( 3) 00:15:25.408 6.210 - 6.258: 98.0720% ( 3) 00:15:25.408 6.258 - 6.305: 
98.0871% ( 2) 00:15:25.408 6.305 - 6.353: 98.1022% ( 2) 00:15:25.408 6.353 - 6.400: 98.1400% ( 5) 00:15:25.408 6.400 - 6.447: 98.1476% ( 1) 00:15:25.408 6.447 - 6.495: 98.1627% ( 2) 00:15:25.408 6.495 - 6.542: 98.1778% ( 2) 00:15:25.408 6.542 - 6.590: 98.2005% ( 3) 00:15:25.408 6.590 - 6.637: 98.2081% ( 1) 00:15:25.408 6.684 - 6.732: 98.2232% ( 2) 00:15:25.408 6.732 - 6.779: 98.2459% ( 3) 00:15:25.408 6.779 - 6.827: 98.2610% ( 2) 00:15:25.408 6.827 - 6.874: 98.2686% ( 1) 00:15:25.408 6.921 - 6.969: 98.2761% ( 1) 00:15:25.408 7.016 - 7.064: 98.2988% ( 3) 00:15:25.408 7.064 - 7.111: 98.3064% ( 1) 00:15:25.408 7.111 - 7.159: 98.3139% ( 1) 00:15:25.408 7.159 - 7.206: 98.3215% ( 1) 00:15:25.408 7.253 - 7.301: 98.3290% ( 1) 00:15:25.408 7.348 - 7.396: 98.3442% ( 2) 00:15:25.408 7.396 - 7.443: 98.3517% ( 1) 00:15:25.408 7.490 - 7.538: 98.3593% ( 1) 00:15:25.408 7.538 - 7.585: 98.3669% ( 1) 00:15:25.408 7.585 - 7.633: 98.3744% ( 1) 00:15:25.408 7.633 - 7.680: 98.3820% ( 1) 00:15:25.408 7.680 - 7.727: 98.3895% ( 1) 00:15:25.408 7.727 - 7.775: 98.3971% ( 1) 00:15:25.408 7.775 - 7.822: 98.4122% ( 2) 00:15:25.408 7.822 - 7.870: 98.4198% ( 1) 00:15:25.408 7.870 - 7.917: 98.4273% ( 1) 00:15:25.408 7.917 - 7.964: 98.4349% ( 1) 00:15:25.408 7.964 - 8.012: 98.4425% ( 1) 00:15:25.408 8.012 - 8.059: 98.4500% ( 1) 00:15:25.408 8.154 - 8.201: 98.4576% ( 1) 00:15:25.408 8.249 - 8.296: 98.4727% ( 2) 00:15:25.408 8.391 - 8.439: 98.4803% ( 1) 00:15:25.408 8.439 - 8.486: 98.4878% ( 1) 00:15:25.408 8.628 - 8.676: 98.4954% ( 1) 00:15:25.408 8.818 - 8.865: 98.5105% ( 2) 00:15:25.408 8.960 - 9.007: 98.5181% ( 1) 00:15:25.408 9.007 - 9.055: 98.5256% ( 1) 00:15:25.408 9.197 - 9.244: 98.5483% ( 3) 00:15:25.408 9.434 - 9.481: 98.5559% ( 1) 00:15:25.408 9.576 - 9.624: 98.5710% ( 2) 00:15:25.408 9.813 - 9.861: 98.5861% ( 2) 00:15:25.408 9.861 - 9.908: 98.6012% ( 2) 00:15:25.408 9.908 - 9.956: 98.6164% ( 2) 00:15:25.408 9.956 - 10.003: 98.6239% ( 1) 00:15:25.408 10.003 - 10.050: 98.6315% ( 1) 
00:15:25.408 10.050 - 10.098: 98.6466% ( 2) 00:15:25.408 10.145 - 10.193: 98.6542% ( 1) 00:15:25.408 10.287 - 10.335: 98.6693% ( 2) 00:15:25.408 10.477 - 10.524: 98.6844% ( 2) 00:15:25.408 10.524 - 10.572: 98.6920% ( 1) 00:15:25.408 10.619 - 10.667: 98.7071% ( 2) 00:15:25.408 10.667 - 10.714: 98.7147% ( 1) 00:15:25.408 10.714 - 10.761: 98.7298% ( 2) 00:15:25.408 10.904 - 10.951: 98.7449% ( 2) 00:15:25.408 10.951 - 10.999: 98.7676% ( 3) 00:15:25.408 10.999 - 11.046: 98.7751% ( 1) 00:15:25.408 11.093 - 11.141: 98.7827% ( 1) 00:15:25.408 11.141 - 11.188: 98.7903% ( 1) 00:15:25.408 11.188 - 11.236: 98.7978% ( 1) 00:15:25.408 11.615 - 11.662: 98.8054% ( 1) 00:15:25.408 11.662 - 11.710: 98.8129% ( 1) 00:15:25.408 11.757 - 11.804: 98.8281% ( 2) 00:15:25.408 11.899 - 11.947: 98.8356% ( 1) 00:15:25.408 11.994 - 12.041: 98.8432% ( 1) 00:15:25.408 12.136 - 12.231: 98.8507% ( 1) 00:15:25.408 12.231 - 12.326: 98.8583% ( 1) 00:15:25.408 12.326 - 12.421: 98.8659% ( 1) 00:15:25.408 12.516 - 12.610: 98.8734% ( 1) 00:15:25.408 12.895 - 12.990: 98.8886% ( 2) 00:15:25.408 13.369 - 13.464: 98.8961% ( 1) 00:15:25.408 13.559 - 13.653: 98.9037% ( 1) 00:15:25.408 13.653 - 13.748: 98.9112% ( 1) 00:15:25.408 13.748 - 13.843: 98.9339% ( 3) 00:15:25.408 13.843 - 13.938: 98.9415% ( 1) 00:15:25.408 13.938 - 14.033: 98.9566% ( 2) 00:15:25.408 14.033 - 14.127: 98.9717% ( 2) 00:15:25.408 14.127 - 14.222: 98.9793% ( 1) 00:15:25.408 14.222 - 14.317: 98.9944% ( 2) 00:15:25.408 14.317 - 14.412: 99.0095% ( 2) 00:15:25.408 14.412 - 14.507: 99.0171% ( 1) 00:15:25.408 14.507 - 14.601: 99.0246% ( 1) 00:15:25.408 14.696 - 14.791: 99.0322% ( 1) 00:15:25.408 14.791 - 14.886: 99.0398% ( 1) 00:15:25.408 14.886 - 14.981: 99.0473% ( 1) 00:15:25.408 14.981 - 15.076: 99.0625% ( 2) 00:15:25.408 16.687 - 16.782: 99.0700% ( 1) 00:15:25.408 17.067 - 17.161: 99.0851% ( 2) 00:15:25.408 17.351 - 17.446: 99.0927% ( 1) 00:15:25.408 17.541 - 17.636: 99.1078% ( 2) 00:15:25.408 17.636 - 17.730: 99.1305% ( 3) 00:15:25.408 17.730 
- 17.825: 99.1456% ( 2) 00:15:25.408 17.825 - 17.920: 99.1910% ( 6) 00:15:25.408 17.920 - 18.015: 99.2288% ( 5) 00:15:25.408 18.015 - 18.110: 99.2742% ( 6) 00:15:25.408 18.110 - 18.204: 99.3271% ( 7) 00:15:25.408 18.204 - 18.299: 99.3573% ( 4) 00:15:25.408 18.299 - 18.394: 99.3724% ( 2) 00:15:25.408 18.394 - 18.489: 99.4254% ( 7) 00:15:25.408 18.489 - 18.584: 99.4632% ( 5) 00:15:25.408 18.584 - 18.679: 99.4934% ( 4) 00:15:25.408 18.679 - 18.773: 99.5615% ( 9) 00:15:25.408 18.773 - 18.868: 99.5766% ( 2) 00:15:25.408 18.868 - 18.963: 99.6144% ( 5) 00:15:25.408 18.963 - 19.058: 99.6598% ( 6) 00:15:25.408 19.058 - 19.153: 99.6749% ( 2) 00:15:25.408 19.153 - 19.247: 99.6900% ( 2) 00:15:25.408 19.247 - 19.342: 99.7051% ( 2) 00:15:25.408 19.437 - 19.532: 99.7202% ( 2) 00:15:25.408 19.627 - 19.721: 99.7278% ( 1) 00:15:25.408 19.721 - 19.816: 99.7429% ( 2) 00:15:25.408 19.816 - 19.911: 99.7656% ( 3) 00:15:25.408 19.911 - 20.006: 99.7883% ( 3) 00:15:25.408 20.006 - 20.101: 99.8034% ( 2) 00:15:25.408 20.101 - 20.196: 99.8110% ( 1) 00:15:25.408 20.196 - 20.290: 99.8185% ( 1) 00:15:25.408 20.385 - 20.480: 99.8261% ( 1) 00:15:25.408 20.480 - 20.575: 99.8337% ( 1) 00:15:25.408 21.049 - 21.144: 99.8412% ( 1) 00:15:25.408 21.239 - 21.333: 99.8488% ( 1) 00:15:25.408 21.428 - 21.523: 99.8563% ( 1) 00:15:25.408 23.230 - 23.324: 99.8639% ( 1) 00:15:25.408 25.031 - 25.221: 99.8715% ( 1) 00:15:25.408 25.221 - 25.410: 99.8790% ( 1) 00:15:25.408 25.410 - 25.600: 99.8866% ( 1) 00:15:25.408 26.169 - 26.359: 99.8941% ( 1) 00:15:25.408 27.117 - 27.307: 99.9093% ( 2) 00:15:25.408 27.876 - 28.065: 99.9168% ( 1) 00:15:25.408 29.961 - 30.151: 99.9244% ( 1) 00:15:25.408 30.910 - 31.099: 99.9320% ( 1) 00:15:25.408 3980.705 - 4004.978: 99.9849% ( 7) 00:15:25.408 4004.978 - 4029.250: 100.0000% ( 2) 00:15:25.408 00:15:25.408 Complete histogram 00:15:25.408 ================== 00:15:25.408 Range in us Cumulative Count 00:15:25.408 2.050 - 2.062: 0.1134% ( 15) 00:15:25.408 2.062 - 2.074: 8.1279% ( 1060) 
00:15:25.408 2.074 - 2.086: 12.1881% ( 537) 00:15:25.408 2.086 - 2.098: 13.3147% ( 149) 00:15:25.408 2.098 - 2.110: 21.1704% ( 1039) 00:15:25.408 2.110 - 2.121: 23.6655% ( 330) 00:15:25.408 2.121 - 2.133: 27.7635% ( 542) 00:15:25.408 2.133 - 2.145: 45.5391% ( 2351) 00:15:25.408 2.145 - 2.157: 50.1437% ( 609) 00:15:25.408 2.157 - 2.169: 54.2341% ( 541) 00:15:25.408 2.169 - 2.181: 62.6871% ( 1118) 00:15:25.408 2.181 - 2.193: 65.1369% ( 324) 00:15:25.408 2.193 - 2.204: 67.4580% ( 307) 00:15:25.409 2.204 - 2.216: 74.4443% ( 924) 00:15:25.409 2.216 - 2.228: 76.4781% ( 269) 00:15:25.409 2.228 - 2.240: 78.1340% ( 219) 00:15:25.409 2.240 - 2.252: 81.8993% ( 498) 00:15:25.409 2.252 - 2.264: 82.9578% ( 140) 00:15:25.409 2.264 - 2.276: 83.5249% ( 75) 00:15:25.409 2.276 - 2.287: 84.2961% ( 102) 00:15:25.409 2.287 - 2.299: 85.4983% ( 159) 00:15:25.409 2.299 - 2.311: 86.8290% ( 176) 00:15:25.409 2.311 - 2.323: 87.9404% ( 147) 00:15:25.409 2.323 - 2.335: 88.7116% ( 102) 00:15:25.409 2.335 - 2.347: 89.3089% ( 79) 00:15:25.409 2.347 - 2.359: 89.7550% ( 59) 00:15:25.409 2.359 - 2.370: 90.1558% ( 53) 00:15:25.409 2.370 - 2.382: 90.5867% ( 57) 00:15:25.409 2.382 - 2.394: 90.9345% ( 46) 00:15:25.409 2.394 - 2.406: 91.3731% ( 58) 00:15:25.409 2.406 - 2.418: 91.7209% ( 46) 00:15:25.409 2.418 - 2.430: 92.0157% ( 39) 00:15:25.409 2.430 - 2.441: 92.3408% ( 43) 00:15:25.409 2.441 - 2.453: 92.6055% ( 35) 00:15:25.409 2.453 - 2.465: 92.8777% ( 36) 00:15:25.409 2.465 - 2.477: 93.1045% ( 30) 00:15:25.409 2.477 - 2.489: 93.4372% ( 44) 00:15:25.409 2.489 - 2.501: 93.6640% ( 30) 00:15:25.409 2.501 - 2.513: 93.9740% ( 41) 00:15:25.409 2.513 - 2.524: 94.2159% ( 32) 00:15:25.409 2.524 - 2.536: 94.3823% ( 22) 00:15:25.409 2.536 - 2.548: 94.4957% ( 15) 00:15:25.409 2.548 - 2.560: 94.6772% ( 24) 00:15:25.409 2.560 - 2.572: 94.9418% ( 35) 00:15:25.409 2.572 - 2.584: 95.0628% ( 16) 00:15:25.409 2.584 - 2.596: 95.2896% ( 30) 00:15:25.409 2.596 - 2.607: 95.4484% ( 21) 00:15:25.409 2.607 - 2.619: 95.6223% ( 
23) 00:15:25.409 2.619 - 2.631: 95.7810% ( 21) 00:15:25.409 2.631 - 2.643: 95.9549% ( 23) 00:15:25.409 2.643 - 2.655: 96.0835% ( 17) 00:15:25.409 2.655 - 2.667: 96.2120% ( 17) 00:15:25.409 2.667 - 2.679: 96.3254% ( 15) 00:15:25.409 2.679 - 2.690: 96.4086% ( 11) 00:15:25.409 2.690 - 2.702: 96.4540% ( 6) 00:15:25.409 2.702 - 2.714: 96.5522% ( 13) 00:15:25.409 2.714 - 2.726: 96.6279% ( 10) 00:15:25.409 2.726 - 2.738: 96.7186% ( 12) 00:15:25.409 2.738 - 2.750: 96.7715% ( 7) 00:15:25.409 2.750 - 2.761: 96.8698% ( 13) 00:15:25.409 2.761 - 2.773: 96.9530% ( 11) 00:15:25.409 2.773 - 2.785: 97.0286% ( 10) 00:15:25.409 2.785 - 2.797: 97.0739% ( 6) 00:15:25.409 2.797 - 2.809: 97.1193% ( 6) 00:15:25.409 2.809 - 2.821: 97.1722% ( 7) 00:15:25.409 2.821 - 2.833: 97.2252% ( 7) 00:15:25.409 2.833 - 2.844: 97.2630% ( 5) 00:15:25.409 2.844 - 2.856: 97.2932% ( 4) 00:15:25.409 2.856 - 2.868: 97.3386% ( 6) 00:15:25.409 2.868 - 2.880: 97.3613% ( 3) 00:15:25.409 2.880 - 2.892: 97.4293% ( 9) 00:15:25.409 2.892 - 2.904: 97.4595% ( 4) 00:15:25.409 2.904 - 2.916: 97.4822% ( 3) 00:15:25.409 2.916 - 2.927: 97.5200% ( 5) 00:15:25.409 2.927 - 2.939: 97.5427% ( 3) 00:15:25.409 2.939 - 2.951: 97.5956% ( 7) 00:15:25.409 2.951 - 2.963: 97.6259% ( 4) 00:15:25.409 2.963 - 2.975: 97.6486% ( 3) 00:15:25.409 2.975 - 2.987: 97.6864% ( 5) 00:15:25.409 2.987 - 2.999: 97.6939% ( 1) 00:15:25.409 2.999 - 3.010: 97.7166% ( 3) 00:15:25.409 3.010 - 3.022: 97.7620% ( 6) 00:15:25.409 3.022 - 3.034: 97.7771% ( 2) 00:15:25.409 3.034 - 3.058: 97.7922% ( 2) 00:15:25.409 3.058 - 3.081: 97.8603% ( 9) 00:15:25.409 3.081 - 3.105: 97.8754% ( 2) 00:15:25.409 3.129 - 3.153: 97.8981% ( 3) 00:15:25.409 3.176 - 3.200: 97.9283% ( 4) 00:15:25.409 3.200 - 3.224: 97.9359% ( 1) 00:15:25.409 3.224 - 3.247: 97.9661% ( 4) 00:15:25.409 3.247 - 3.271: 97.9888% ( 3) 00:15:25.409 3.271 - 3.295: 98.0115% ( 3) 00:15:25.409 3.295 - 3.319: 98.0417% ( 4) 00:15:25.409 3.342 - 3.366: 98.0493% ( 1) 00:15:25.409 3.366 - 3.390: 98.0947% ( 6) 
00:15:25.409 3.390 - 3.413: 98.1173% ( 3) 00:15:25.409 3.413 - 3.437: 98.1551% ( 5) 00:15:25.409 3.437 - 3.461: 98.2308% ( 10) 00:15:25.409 3.461 - 3.484: 98.2383% ( 1) 00:15:25.409 3.484 - 3.508: 98.2534% ( 2) 00:15:25.409 3.508 - 3.532: 98.2837% ( 4) 00:15:25.409 3.532 - 3.556: 98.3064% ( 3) 00:15:25.409 3.556 - 3.579: 98.3139% ( 1) 00:15:25.409 3.579 - 3.603: 98.3215% ( 1) 00:15:25.409 3.603 - 3.627: 98.3593% ( 5) 00:15:25.409 3.627 - 3.650: 98.3895% ( 4) 00:15:25.409 3.650 - 3.674: 98.4122% ( 3) 00:15:25.409 3.674 - 3.698: 98.4349% ( 3) 00:15:25.409 3.698 - 3.721: 98.4651% ( 4) 00:15:25.409 3.721 - 3.745: 98.5029% ( 5) 00:15:25.409 3.745 - 3.769: 98.5256% ( 3) 00:15:25.409 3.769 - 3.793: 98.5332% ( 1) 00:15:25.409 3.793 - 3.816: 98.5483% ( 2) 00:15:25.409 3.816 - 3.840: 98.5634% ( 2) 00:15:25.409 3.840 - 3.864: 98.5710% ( 1) 00:15:25.409 3.864 - 3.887: 98.6012% ( 4) 00:15:25.409 3.911 - 3.935: 98.6239% ( 3) 00:15:25.409 3.935 - 3.959: 98.6315% ( 1) 00:15:25.409 3.959 - 3.982: 98.6466% ( 2) 00:15:25.409 4.006 - 4.030: 98.6617% ( 2) 00:15:25.409 4.053 - 4.077: 98.6844% ( 3) 00:15:25.409 4.077 - 4.101: 98.7071% ( 3) 00:15:25.409 4.101 - 4.124: 98.7147% ( 1) 00:15:25.409 4.267 - 4.290: 98.7298% ( 2) 00:15:25.409 4.290 - 4.314: 98.7449% ( 2) 00:15:25.409 4.361 - 4.385: 98.7525% ( 1) 00:15:25.409 4.385 - 4.409: 98.7600% ( 1) 00:15:25.409 4.433 - 4.456: 98.7676% ( 1) 00:15:25.409 4.504 - 4.527: 98.7751% ( 1) 00:15:25.409 4.551 - 4.575: 98.7827% ( 1) 00:15:25.409 4.670 - 4.693: 98.7903% ( 1) 00:15:25.409 4.693 - 4.717: 98.7978% ( 1) 00:15:25.409 5.096 - 5.120: 98.8054% ( 1) 00:15:25.409 5.167 - 5.191: 98.8129% ( 1) 00:15:25.409 5.381 - 5.404: 98.8205% ( 1) 00:15:25.409 6.068 - 6.116: 98.8281% ( 1) 00:15:25.409 6.210 - 6.258: 98.8356% ( 1) 00:15:25.409 6.495 - 6.542: 98.8507% ( 2) 00:15:25.409 6.827 - 6.874: 98.8583% ( 1) 00:15:25.409 6.874 - 6.921: 98.8734% ( 2) 00:15:25.409 7.253 - 7.301: 98.8810% ( 1) 00:15:25.409 7.680 - 7.727: 98.8961% ( 2) 00:15:25.409 8.059 - 
8.107: 98.9037% ( 1) 00:15:25.409 8.107 - 8.154: 98.9112% ( 1) 00:15:25.409 8.533 - 8.581: 98.9188% ( 1) 00:15:25.409 8.581 - 8.628: 98.9264% ( 1) 00:15:25.409 8.628 - 8.676: 98.9339% ( 1) 00:15:25.409 8.865 - 8.913: 98.9415% ( 1) 00:15:25.409 8.960 - 9.007: 98.9490% ( 1) 00:15:25.409 9.055 - 9.102: 98.9642% ( 2) 00:15:25.409 9.102 - 9.150: 98.9717% ( 1) 00:15:25.409 9.576 - 9.624: 98.9868% ( 2) 00:15:25.409 10.003 - 10.050: 98.9944% ( 1) 00:15:25.409 10.619 - 10.667: 99.0020% ( 1) 00:15:25.409 11.236 - 11.283: 99.0095% ( 1) 00:15:25.409 11.473 - 11.520: 99.0171% ( 1) 00:15:25.409 15.455 - 15.550: 99.0322% ( 2) 00:15:25.409 15.739 - 15.834: 99.0398% ( 1) 00:15:25.409 15.834 - 15.929: 99.0625% ( 3) 00:15:25.409 15.929 - 16.024: 99.0851% ( 3) 00:15:25.409 16.213 - 16.308: 99.1003% ( 2) 00:15:25.409 16.403 - 16.498: 99.1305% ( 4) 00:15:25.409 16.498 - 16.593: 99.1607% ( 4) 00:15:25.409 16.593 - 16.687: 99.1985% ( 5) 00:15:25.409 16.782 - 16.877: 99.2061% ( 1) 00:15:25.409 16.877 - 16.972: 99.2439% ( 5) 00:15:25.409 16.972 - 17.067: 99.2817% ( 5) 00:15:25.409 17.161 - 17.256: 99.3120% ( 4) 00:15:25.409 17.351 - 17.446: 99.3195% ( 1) 00:15:25.409 17.825 - 17.920: 99.3271% ( 1) 00:15:25.409 18.015 - 18.110: 99.3422% ( 2) 00:15:25.409 18.110 - 18.204: 99.3498% ( 1) 00:15:25.409 18.679 - 18.773: 99.3573% ( 1) 00:15:25.409 18.773 - 18.868: 99.3649% ( 1) 00:15:25.409 19.721 - 19.816: 99.3724% ( 1) 00:15:25.409 19.911 - 20.006: 99.3800% ( 1) 00:15:25.409 21.239 - 21.333: 99.3876% ( 1) 00:15:25.409 22.756 - 22.850: 99.3951% ( 1) 00:15:25.409 23.609 - 23.704: 99.4027% ( 1) 00:15:25.409 [2024-07-14 13:51:02.971599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.409 3980.705 - 4004.978: 99.8866% ( 64) 00:15:25.409 4004.978 - 4029.250: 100.0000% ( 15) 00:15:25.409 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 
nqn.2019-07.io.spdk:cnode1 1 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:25.409 [ 00:15:25.409 { 00:15:25.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:25.409 "subtype": "Discovery", 00:15:25.409 "listen_addresses": [], 00:15:25.409 "allow_any_host": true, 00:15:25.409 "hosts": [] 00:15:25.409 }, 00:15:25.409 { 00:15:25.409 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:25.409 "subtype": "NVMe", 00:15:25.409 "listen_addresses": [ 00:15:25.409 { 00:15:25.409 "trtype": "VFIOUSER", 00:15:25.409 "adrfam": "IPv4", 00:15:25.409 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:25.409 "trsvcid": "0" 00:15:25.409 } 00:15:25.409 ], 00:15:25.409 "allow_any_host": true, 00:15:25.409 "hosts": [], 00:15:25.409 "serial_number": "SPDK1", 00:15:25.409 "model_number": "SPDK bdev Controller", 00:15:25.409 "max_namespaces": 32, 00:15:25.409 "min_cntlid": 1, 00:15:25.409 "max_cntlid": 65519, 00:15:25.409 "namespaces": [ 00:15:25.409 { 00:15:25.409 "nsid": 1, 00:15:25.409 "bdev_name": "Malloc1", 00:15:25.409 "name": "Malloc1", 00:15:25.409 "nguid": "2D2CF0532364490B8CA02A82C181FA5D", 00:15:25.409 "uuid": "2d2cf053-2364-490b-8ca0-2a82c181fa5d" 00:15:25.409 } 00:15:25.409 ] 00:15:25.409 }, 00:15:25.409 { 00:15:25.409 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:25.409 "subtype": "NVMe", 00:15:25.409 "listen_addresses": [ 00:15:25.409 { 00:15:25.409 "trtype": "VFIOUSER", 00:15:25.409 "adrfam": "IPv4", 00:15:25.409 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:25.409 "trsvcid": "0" 
00:15:25.409 } 00:15:25.409 ], 00:15:25.409 "allow_any_host": true, 00:15:25.409 "hosts": [], 00:15:25.409 "serial_number": "SPDK2", 00:15:25.409 "model_number": "SPDK bdev Controller", 00:15:25.409 "max_namespaces": 32, 00:15:25.409 "min_cntlid": 1, 00:15:25.409 "max_cntlid": 65519, 00:15:25.409 "namespaces": [ 00:15:25.409 { 00:15:25.409 "nsid": 1, 00:15:25.409 "bdev_name": "Malloc2", 00:15:25.409 "name": "Malloc2", 00:15:25.409 "nguid": "C5F53BF8A3E14A338234470C89E171FC", 00:15:25.409 "uuid": "c5f53bf8-a3e1-4a33-8234-470c89e171fc" 00:15:25.409 } 00:15:25.409 ] 00:15:25.409 } 00:15:25.409 ] 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1418855 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:25.409 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:25.409 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.665 [2024-07-14 13:51:03.478691] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:25.665 Malloc3 00:15:25.665 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:25.921 [2024-07-14 13:51:03.836280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:25.921 13:51:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:25.921 Asynchronous Event Request test 00:15:25.921 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.921 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:25.921 Registering asynchronous event callbacks... 00:15:25.921 Starting namespace attribute notice tests for all controllers... 00:15:25.921 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:25.921 aer_cb - Changed Namespace 00:15:25.921 Cleaning up... 
00:15:26.177 [ 00:15:26.177 { 00:15:26.177 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:26.177 "subtype": "Discovery", 00:15:26.177 "listen_addresses": [], 00:15:26.177 "allow_any_host": true, 00:15:26.177 "hosts": [] 00:15:26.177 }, 00:15:26.177 { 00:15:26.177 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:26.177 "subtype": "NVMe", 00:15:26.177 "listen_addresses": [ 00:15:26.177 { 00:15:26.177 "trtype": "VFIOUSER", 00:15:26.177 "adrfam": "IPv4", 00:15:26.177 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:26.177 "trsvcid": "0" 00:15:26.177 } 00:15:26.177 ], 00:15:26.177 "allow_any_host": true, 00:15:26.177 "hosts": [], 00:15:26.177 "serial_number": "SPDK1", 00:15:26.177 "model_number": "SPDK bdev Controller", 00:15:26.177 "max_namespaces": 32, 00:15:26.177 "min_cntlid": 1, 00:15:26.177 "max_cntlid": 65519, 00:15:26.177 "namespaces": [ 00:15:26.177 { 00:15:26.177 "nsid": 1, 00:15:26.177 "bdev_name": "Malloc1", 00:15:26.177 "name": "Malloc1", 00:15:26.177 "nguid": "2D2CF0532364490B8CA02A82C181FA5D", 00:15:26.177 "uuid": "2d2cf053-2364-490b-8ca0-2a82c181fa5d" 00:15:26.177 }, 00:15:26.177 { 00:15:26.177 "nsid": 2, 00:15:26.177 "bdev_name": "Malloc3", 00:15:26.177 "name": "Malloc3", 00:15:26.177 "nguid": "704D55BCD7214221A922168B5298015A", 00:15:26.177 "uuid": "704d55bc-d721-4221-a922-168b5298015a" 00:15:26.177 } 00:15:26.177 ] 00:15:26.177 }, 00:15:26.178 { 00:15:26.178 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:26.178 "subtype": "NVMe", 00:15:26.178 "listen_addresses": [ 00:15:26.178 { 00:15:26.178 "trtype": "VFIOUSER", 00:15:26.178 "adrfam": "IPv4", 00:15:26.178 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:26.178 "trsvcid": "0" 00:15:26.178 } 00:15:26.178 ], 00:15:26.178 "allow_any_host": true, 00:15:26.178 "hosts": [], 00:15:26.178 "serial_number": "SPDK2", 00:15:26.178 "model_number": "SPDK bdev Controller", 00:15:26.178 "max_namespaces": 32, 00:15:26.178 "min_cntlid": 1, 00:15:26.178 "max_cntlid": 65519, 00:15:26.178 "namespaces": [ 
00:15:26.178 { 00:15:26.178 "nsid": 1, 00:15:26.178 "bdev_name": "Malloc2", 00:15:26.178 "name": "Malloc2", 00:15:26.178 "nguid": "C5F53BF8A3E14A338234470C89E171FC", 00:15:26.178 "uuid": "c5f53bf8-a3e1-4a33-8234-470c89e171fc" 00:15:26.178 } 00:15:26.178 ] 00:15:26.178 } 00:15:26.178 ] 00:15:26.178 13:51:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1418855 00:15:26.178 13:51:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.178 13:51:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:26.178 13:51:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:26.178 13:51:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:26.178 [2024-07-14 13:51:04.128056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:26.178 [2024-07-14 13:51:04.128098] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418992 ] 00:15:26.178 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.437 [2024-07-14 13:51:04.163331] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:26.437 [2024-07-14 13:51:04.170207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:26.437 [2024-07-14 13:51:04.170237] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3c9e49d000 00:15:26.437 [2024-07-14 13:51:04.173893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.174213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.175236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.176233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.177240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.178240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.179253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.180262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:26.437 [2024-07-14 13:51:04.181269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:26.437 [2024-07-14 13:51:04.181292] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3c9d24f000 00:15:26.437 [2024-07-14 13:51:04.182416] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.437 [2024-07-14 13:51:04.197185] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:26.437 [2024-07-14 13:51:04.197238] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:26.437 [2024-07-14 13:51:04.199340] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:26.437 [2024-07-14 13:51:04.199389] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:26.437 [2024-07-14 13:51:04.199469] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:26.437 [2024-07-14 13:51:04.199491] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:26.437 [2024-07-14 13:51:04.199506] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:26.437 [2024-07-14 13:51:04.200347] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:26.437 [2024-07-14 13:51:04.200372] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:26.437 [2024-07-14 13:51:04.200385] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:26.437 [2024-07-14 13:51:04.201356] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:26.437 [2024-07-14 13:51:04.201376] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:26.437 [2024-07-14 13:51:04.201390] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:26.437 [2024-07-14 13:51:04.202360] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:26.437 [2024-07-14 13:51:04.202380] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:26.437 [2024-07-14 13:51:04.203369] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:26.437 [2024-07-14 13:51:04.203388] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:26.437 [2024-07-14 13:51:04.203397] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:26.437 [2024-07-14 13:51:04.203413] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:26.437 [2024-07-14 13:51:04.203524] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:26.437 [2024-07-14 13:51:04.203536] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:26.437 [2024-07-14 13:51:04.203544] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:26.437 [2024-07-14 13:51:04.207890] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:26.437 [2024-07-14 13:51:04.208394] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:26.437 [2024-07-14 13:51:04.209406] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:26.437 [2024-07-14 13:51:04.210396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.437 [2024-07-14 13:51:04.210482] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:26.437 [2024-07-14 13:51:04.211417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:26.437 [2024-07-14 13:51:04.211437] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:26.437 [2024-07-14 13:51:04.211446] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:26.437 [2024-07-14 13:51:04.211469] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:26.437 [2024-07-14 13:51:04.211482] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:26.437 [2024-07-14 13:51:04.211504] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.437 [2024-07-14 13:51:04.211514] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.437 [2024-07-14 13:51:04.211531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.437 [2024-07-14 13:51:04.217903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:26.437 [2024-07-14 13:51:04.217932] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:26.437 [2024-07-14 13:51:04.217943] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:26.437 [2024-07-14 13:51:04.217951] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:26.437 [2024-07-14 13:51:04.217958] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:26.437 [2024-07-14 13:51:04.217966] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:26.437 [2024-07-14 13:51:04.217974] 
nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:26.437 [2024-07-14 13:51:04.217982] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:26.437 [2024-07-14 13:51:04.217998] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:26.437 [2024-07-14 13:51:04.218015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:26.437 [2024-07-14 13:51:04.225887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:26.437 [2024-07-14 13:51:04.225913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.437 [2024-07-14 13:51:04.225927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.437 [2024-07-14 13:51:04.225939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.437 [2024-07-14 13:51:04.225951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.437 [2024-07-14 13:51:04.225959] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.225975] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.225990] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.233903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.233922] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:26.438 [2024-07-14 13:51:04.233932] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.233943] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.233957] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.233972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.241896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.241971] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.241988] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.242001] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:26.438 [2024-07-14 13:51:04.242010] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:26.438 [2024-07-14 13:51:04.242020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.249892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.249921] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:26.438 [2024-07-14 13:51:04.249938] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.249956] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.249969] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.438 [2024-07-14 13:51:04.249978] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.438 [2024-07-14 13:51:04.249988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.257890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.257919] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.257936] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:15:26.438 [2024-07-14 13:51:04.257949] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:26.438 [2024-07-14 13:51:04.257958] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.438 [2024-07-14 13:51:04.257968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.265889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.265910] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.265923] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.265938] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.265949] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.265957] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:26.438 [2024-07-14 13:51:04.265966] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:26.438 [2024-07-14 13:51:04.265974] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:26.438 [2024-07-14 
13:51:04.265982] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:26.438 [2024-07-14 13:51:04.266013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.273888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.273915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.281887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.281913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.289903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.289929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.297905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.297932] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:26.438 [2024-07-14 13:51:04.297942] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:26.438 [2024-07-14 13:51:04.297949] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:26.438 [2024-07-14 13:51:04.297955] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:15:26.438 [2024-07-14 13:51:04.297965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:26.438 [2024-07-14 13:51:04.297977] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:26.438 [2024-07-14 13:51:04.297985] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:26.438 [2024-07-14 13:51:04.297994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.298005] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:26.438 [2024-07-14 13:51:04.298014] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:26.438 [2024-07-14 13:51:04.298023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.298035] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:26.438 [2024-07-14 13:51:04.298043] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:26.438 [2024-07-14 13:51:04.298052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:26.438 [2024-07-14 13:51:04.305889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.305917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 
13:51:04.305933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:26.438 [2024-07-14 13:51:04.305948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:26.438 ===================================================== 00:15:26.438 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.438 ===================================================== 00:15:26.438 Controller Capabilities/Features 00:15:26.438 ================================ 00:15:26.438 Vendor ID: 4e58 00:15:26.438 Subsystem Vendor ID: 4e58 00:15:26.438 Serial Number: SPDK2 00:15:26.438 Model Number: SPDK bdev Controller 00:15:26.438 Firmware Version: 24.05.1 00:15:26.438 Recommended Arb Burst: 6 00:15:26.438 IEEE OUI Identifier: 8d 6b 50 00:15:26.438 Multi-path I/O 00:15:26.438 May have multiple subsystem ports: Yes 00:15:26.438 May have multiple controllers: Yes 00:15:26.438 Associated with SR-IOV VF: No 00:15:26.438 Max Data Transfer Size: 131072 00:15:26.438 Max Number of Namespaces: 32 00:15:26.438 Max Number of I/O Queues: 127 00:15:26.438 NVMe Specification Version (VS): 1.3 00:15:26.438 NVMe Specification Version (Identify): 1.3 00:15:26.438 Maximum Queue Entries: 256 00:15:26.438 Contiguous Queues Required: Yes 00:15:26.438 Arbitration Mechanisms Supported 00:15:26.438 Weighted Round Robin: Not Supported 00:15:26.438 Vendor Specific: Not Supported 00:15:26.438 Reset Timeout: 15000 ms 00:15:26.438 Doorbell Stride: 4 bytes 00:15:26.438 NVM Subsystem Reset: Not Supported 00:15:26.438 Command Sets Supported 00:15:26.438 NVM Command Set: Supported 00:15:26.438 Boot Partition: Not Supported 00:15:26.438 Memory Page Size Minimum: 4096 bytes 00:15:26.438 Memory Page Size Maximum: 4096 bytes 00:15:26.438 Persistent Memory Region: Not Supported 00:15:26.438 Optional Asynchronous Events Supported 00:15:26.438 
Namespace Attribute Notices: Supported 00:15:26.438 Firmware Activation Notices: Not Supported 00:15:26.438 ANA Change Notices: Not Supported 00:15:26.438 PLE Aggregate Log Change Notices: Not Supported 00:15:26.438 LBA Status Info Alert Notices: Not Supported 00:15:26.438 EGE Aggregate Log Change Notices: Not Supported 00:15:26.438 Normal NVM Subsystem Shutdown event: Not Supported 00:15:26.438 Zone Descriptor Change Notices: Not Supported 00:15:26.438 Discovery Log Change Notices: Not Supported 00:15:26.438 Controller Attributes 00:15:26.438 128-bit Host Identifier: Supported 00:15:26.438 Non-Operational Permissive Mode: Not Supported 00:15:26.438 NVM Sets: Not Supported 00:15:26.438 Read Recovery Levels: Not Supported 00:15:26.438 Endurance Groups: Not Supported 00:15:26.438 Predictable Latency Mode: Not Supported 00:15:26.438 Traffic Based Keep ALive: Not Supported 00:15:26.438 Namespace Granularity: Not Supported 00:15:26.438 SQ Associations: Not Supported 00:15:26.438 UUID List: Not Supported 00:15:26.438 Multi-Domain Subsystem: Not Supported 00:15:26.439 Fixed Capacity Management: Not Supported 00:15:26.439 Variable Capacity Management: Not Supported 00:15:26.439 Delete Endurance Group: Not Supported 00:15:26.439 Delete NVM Set: Not Supported 00:15:26.439 Extended LBA Formats Supported: Not Supported 00:15:26.439 Flexible Data Placement Supported: Not Supported 00:15:26.439 00:15:26.439 Controller Memory Buffer Support 00:15:26.439 ================================ 00:15:26.439 Supported: No 00:15:26.439 00:15:26.439 Persistent Memory Region Support 00:15:26.439 ================================ 00:15:26.439 Supported: No 00:15:26.439 00:15:26.439 Admin Command Set Attributes 00:15:26.439 ============================ 00:15:26.439 Security Send/Receive: Not Supported 00:15:26.439 Format NVM: Not Supported 00:15:26.439 Firmware Activate/Download: Not Supported 00:15:26.439 Namespace Management: Not Supported 00:15:26.439 Device Self-Test: Not Supported 
00:15:26.439 Directives: Not Supported 00:15:26.439 NVMe-MI: Not Supported 00:15:26.439 Virtualization Management: Not Supported 00:15:26.439 Doorbell Buffer Config: Not Supported 00:15:26.439 Get LBA Status Capability: Not Supported 00:15:26.439 Command & Feature Lockdown Capability: Not Supported 00:15:26.439 Abort Command Limit: 4 00:15:26.439 Async Event Request Limit: 4 00:15:26.439 Number of Firmware Slots: N/A 00:15:26.439 Firmware Slot 1 Read-Only: N/A 00:15:26.439 Firmware Activation Without Reset: N/A 00:15:26.439 Multiple Update Detection Support: N/A 00:15:26.439 Firmware Update Granularity: No Information Provided 00:15:26.439 Per-Namespace SMART Log: No 00:15:26.439 Asymmetric Namespace Access Log Page: Not Supported 00:15:26.439 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:26.439 Command Effects Log Page: Supported 00:15:26.439 Get Log Page Extended Data: Supported 00:15:26.439 Telemetry Log Pages: Not Supported 00:15:26.439 Persistent Event Log Pages: Not Supported 00:15:26.439 Supported Log Pages Log Page: May Support 00:15:26.439 Commands Supported & Effects Log Page: Not Supported 00:15:26.439 Feature Identifiers & Effects Log Page:May Support 00:15:26.439 NVMe-MI Commands & Effects Log Page: May Support 00:15:26.439 Data Area 4 for Telemetry Log: Not Supported 00:15:26.439 Error Log Page Entries Supported: 128 00:15:26.439 Keep Alive: Supported 00:15:26.439 Keep Alive Granularity: 10000 ms 00:15:26.439 00:15:26.439 NVM Command Set Attributes 00:15:26.439 ========================== 00:15:26.439 Submission Queue Entry Size 00:15:26.439 Max: 64 00:15:26.439 Min: 64 00:15:26.439 Completion Queue Entry Size 00:15:26.439 Max: 16 00:15:26.439 Min: 16 00:15:26.439 Number of Namespaces: 32 00:15:26.439 Compare Command: Supported 00:15:26.439 Write Uncorrectable Command: Not Supported 00:15:26.439 Dataset Management Command: Supported 00:15:26.439 Write Zeroes Command: Supported 00:15:26.439 Set Features Save Field: Not Supported 00:15:26.439 
Reservations: Not Supported 00:15:26.439 Timestamp: Not Supported 00:15:26.439 Copy: Supported 00:15:26.439 Volatile Write Cache: Present 00:15:26.439 Atomic Write Unit (Normal): 1 00:15:26.439 Atomic Write Unit (PFail): 1 00:15:26.439 Atomic Compare & Write Unit: 1 00:15:26.439 Fused Compare & Write: Supported 00:15:26.439 Scatter-Gather List 00:15:26.439 SGL Command Set: Supported (Dword aligned) 00:15:26.439 SGL Keyed: Not Supported 00:15:26.439 SGL Bit Bucket Descriptor: Not Supported 00:15:26.439 SGL Metadata Pointer: Not Supported 00:15:26.439 Oversized SGL: Not Supported 00:15:26.439 SGL Metadata Address: Not Supported 00:15:26.439 SGL Offset: Not Supported 00:15:26.439 Transport SGL Data Block: Not Supported 00:15:26.439 Replay Protected Memory Block: Not Supported 00:15:26.439 00:15:26.439 Firmware Slot Information 00:15:26.439 ========================= 00:15:26.439 Active slot: 1 00:15:26.439 Slot 1 Firmware Revision: 24.05.1 00:15:26.439 00:15:26.439 00:15:26.439 Commands Supported and Effects 00:15:26.439 ============================== 00:15:26.439 Admin Commands 00:15:26.439 -------------- 00:15:26.439 Get Log Page (02h): Supported 00:15:26.439 Identify (06h): Supported 00:15:26.439 Abort (08h): Supported 00:15:26.439 Set Features (09h): Supported 00:15:26.439 Get Features (0Ah): Supported 00:15:26.439 Asynchronous Event Request (0Ch): Supported 00:15:26.439 Keep Alive (18h): Supported 00:15:26.439 I/O Commands 00:15:26.439 ------------ 00:15:26.439 Flush (00h): Supported LBA-Change 00:15:26.439 Write (01h): Supported LBA-Change 00:15:26.439 Read (02h): Supported 00:15:26.439 Compare (05h): Supported 00:15:26.439 Write Zeroes (08h): Supported LBA-Change 00:15:26.439 Dataset Management (09h): Supported LBA-Change 00:15:26.439 Copy (19h): Supported LBA-Change 00:15:26.439 Unknown (79h): Supported LBA-Change 00:15:26.439 Unknown (7Ah): Supported 00:15:26.439 00:15:26.439 Error Log 00:15:26.439 ========= 00:15:26.439 00:15:26.439 Arbitration 00:15:26.439 
=========== 00:15:26.439 Arbitration Burst: 1 00:15:26.439 00:15:26.439 Power Management 00:15:26.439 ================ 00:15:26.439 Number of Power States: 1 00:15:26.439 Current Power State: Power State #0 00:15:26.439 Power State #0: 00:15:26.439 Max Power: 0.00 W 00:15:26.439 Non-Operational State: Operational 00:15:26.439 Entry Latency: Not Reported 00:15:26.439 Exit Latency: Not Reported 00:15:26.439 Relative Read Throughput: 0 00:15:26.439 Relative Read Latency: 0 00:15:26.439 Relative Write Throughput: 0 00:15:26.439 Relative Write Latency: 0 00:15:26.439 Idle Power: Not Reported 00:15:26.439 Active Power: Not Reported 00:15:26.439 Non-Operational Permissive Mode: Not Supported 00:15:26.439 00:15:26.439 Health Information 00:15:26.439 ================== 00:15:26.439 Critical Warnings: 00:15:26.439 Available Spare Space: OK 00:15:26.439 Temperature: OK 00:15:26.439 Device Reliability: OK 00:15:26.439 Read Only: No 00:15:26.439 Volatile Memory Backup: OK 00:15:26.439 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-14 13:51:04.306075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:26.439 [2024-07-14 13:51:04.313903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:26.439 [2024-07-14 13:51:04.313961] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:26.439 [2024-07-14 13:51:04.313980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.439 [2024-07-14 13:51:04.313991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.439 [2024-07-14 13:51:04.314001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.439 [2024-07-14 13:51:04.314011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.439 [2024-07-14 13:51:04.314094] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:26.439 [2024-07-14 13:51:04.314122] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:26.439 [2024-07-14 13:51:04.315099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.439 [2024-07-14 13:51:04.315170] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:26.439 [2024-07-14 13:51:04.315185] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:26.439 [2024-07-14 13:51:04.316101] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:26.439 [2024-07-14 13:51:04.316126] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:26.439 [2024-07-14 13:51:04.316192] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:26.439 [2024-07-14 13:51:04.317403] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:26.439 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:26.439 Available Spare: 0% 00:15:26.439 Available Spare Threshold: 0% 00:15:26.439 Life Percentage Used: 0% 00:15:26.439 Data Units Read: 0 00:15:26.439 Data Units Written: 0 00:15:26.439 Host Read Commands: 0 00:15:26.439 Host
Write Commands: 0 00:15:26.439 Controller Busy Time: 0 minutes 00:15:26.439 Power Cycles: 0 00:15:26.439 Power On Hours: 0 hours 00:15:26.439 Unsafe Shutdowns: 0 00:15:26.439 Unrecoverable Media Errors: 0 00:15:26.439 Lifetime Error Log Entries: 0 00:15:26.439 Warning Temperature Time: 0 minutes 00:15:26.439 Critical Temperature Time: 0 minutes 00:15:26.439 00:15:26.439 Number of Queues 00:15:26.439 ================ 00:15:26.439 Number of I/O Submission Queues: 127 00:15:26.439 Number of I/O Completion Queues: 127 00:15:26.439 00:15:26.439 Active Namespaces 00:15:26.439 ================= 00:15:26.439 Namespace ID:1 00:15:26.439 Error Recovery Timeout: Unlimited 00:15:26.439 Command Set Identifier: NVM (00h) 00:15:26.439 Deallocate: Supported 00:15:26.439 Deallocated/Unwritten Error: Not Supported 00:15:26.439 Deallocated Read Value: Unknown 00:15:26.439 Deallocate in Write Zeroes: Not Supported 00:15:26.439 Deallocated Guard Field: 0xFFFF 00:15:26.439 Flush: Supported 00:15:26.439 Reservation: Supported 00:15:26.440 Namespace Sharing Capabilities: Multiple Controllers 00:15:26.440 Size (in LBAs): 131072 (0GiB) 00:15:26.440 Capacity (in LBAs): 131072 (0GiB) 00:15:26.440 Utilization (in LBAs): 131072 (0GiB) 00:15:26.440 NGUID: C5F53BF8A3E14A338234470C89E171FC 00:15:26.440 UUID: c5f53bf8-a3e1-4a33-8234-470c89e171fc 00:15:26.440 Thin Provisioning: Not Supported 00:15:26.440 Per-NS Atomic Units: Yes 00:15:26.440 Atomic Boundary Size (Normal): 0 00:15:26.440 Atomic Boundary Size (PFail): 0 00:15:26.440 Atomic Boundary Offset: 0 00:15:26.440 Maximum Single Source Range Length: 65535 00:15:26.440 Maximum Copy Length: 65535 00:15:26.440 Maximum Source Range Count: 1 00:15:26.440 NGUID/EUI64 Never Reused: No 00:15:26.440 Namespace Write Protected: No 00:15:26.440 Number of LBA Formats: 1 00:15:26.440 Current LBA Format: LBA Format #00 00:15:26.440 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:26.440 00:15:26.440 13:51:04 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:26.440 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.697 [2024-07-14 13:51:04.547972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.960 Initializing NVMe Controllers 00:15:31.960 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.960 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:31.960 Initialization complete. Launching workers. 00:15:31.960 ======================================================== 00:15:31.960 Latency(us) 00:15:31.960 Device Information : IOPS MiB/s Average min max 00:15:31.960 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34892.36 136.30 3667.75 1162.34 8018.18 00:15:31.960 ======================================================== 00:15:31.960 Total : 34892.36 136.30 3667.75 1162.34 8018.18 00:15:31.960 00:15:31.960 [2024-07-14 13:51:09.654227] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.960 13:51:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:31.960 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.960 [2024-07-14 13:51:09.888873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.221 Initializing NVMe Controllers 00:15:37.221 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:15:37.221 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:37.221 Initialization complete. Launching workers. 00:15:37.221 ======================================================== 00:15:37.221 Latency(us) 00:15:37.221 Device Information : IOPS MiB/s Average min max 00:15:37.221 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33826.39 132.13 3783.51 1196.00 8293.48 00:15:37.221 ======================================================== 00:15:37.221 Total : 33826.39 132.13 3783.51 1196.00 8293.48 00:15:37.221 00:15:37.221 [2024-07-14 13:51:14.911889] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.221 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:37.221 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.221 [2024-07-14 13:51:15.127711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.480 [2024-07-14 13:51:20.266008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.480 Initializing NVMe Controllers 00:15:42.480 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.480 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:42.480 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:42.480 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:42.480 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:42.480 Initialization complete. 
Launching workers. 00:15:42.480 Starting thread on core 2 00:15:42.480 Starting thread on core 3 00:15:42.480 Starting thread on core 1 00:15:42.480 13:51:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:42.480 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.737 [2024-07-14 13:51:20.568389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:46.951 [2024-07-14 13:51:24.181163] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.951 Initializing NVMe Controllers 00:15:46.951 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.951 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.951 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:46.951 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:46.951 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:46.951 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:46.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:46.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:46.951 Initialization complete. Launching workers. 
00:15:46.951 Starting thread on core 1 with urgent priority queue 00:15:46.951 Starting thread on core 2 with urgent priority queue 00:15:46.951 Starting thread on core 3 with urgent priority queue 00:15:46.951 Starting thread on core 0 with urgent priority queue 00:15:46.951 SPDK bdev Controller (SPDK2 ) core 0: 4079.33 IO/s 24.51 secs/100000 ios 00:15:46.951 SPDK bdev Controller (SPDK2 ) core 1: 4047.67 IO/s 24.71 secs/100000 ios 00:15:46.951 SPDK bdev Controller (SPDK2 ) core 2: 4571.00 IO/s 21.88 secs/100000 ios 00:15:46.951 SPDK bdev Controller (SPDK2 ) core 3: 3800.67 IO/s 26.31 secs/100000 ios 00:15:46.951 ======================================================== 00:15:46.951 00:15:46.951 13:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:46.951 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.951 [2024-07-14 13:51:24.491402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:46.951 Initializing NVMe Controllers 00:15:46.951 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.951 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:46.951 Namespace ID: 1 size: 0GB 00:15:46.951 Initialization complete. 00:15:46.951 INFO: using host memory buffer for IO 00:15:46.951 Hello world! 
00:15:46.951 [2024-07-14 13:51:24.501464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.951 13:51:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:46.951 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.951 [2024-07-14 13:51:24.779344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.323 Initializing NVMe Controllers 00:15:48.323 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.323 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.323 Initialization complete. Launching workers. 00:15:48.323 submit (in ns) avg, min, max = 7227.6, 3511.1, 4015844.4 00:15:48.323 complete (in ns) avg, min, max = 24850.0, 2061.1, 4015278.9 00:15:48.323 00:15:48.323 Submit histogram 00:15:48.323 ================ 00:15:48.323 Range in us Cumulative Count 00:15:48.323 3.508 - 3.532: 0.1789% ( 24) 00:15:48.323 3.532 - 3.556: 0.5740% ( 53) 00:15:48.323 3.556 - 3.579: 2.3854% ( 243) 00:15:48.323 3.579 - 3.603: 5.9858% ( 483) 00:15:48.323 3.603 - 3.627: 12.4189% ( 863) 00:15:48.323 3.627 - 3.650: 20.7007% ( 1111) 00:15:48.323 3.650 - 3.674: 31.4722% ( 1445) 00:15:48.323 3.674 - 3.698: 40.1640% ( 1166) 00:15:48.323 3.698 - 3.721: 48.4085% ( 1106) 00:15:48.323 3.721 - 3.745: 53.2016% ( 643) 00:15:48.323 3.745 - 3.769: 57.8233% ( 620) 00:15:48.323 3.769 - 3.793: 61.4834% ( 491) 00:15:48.323 3.793 - 3.816: 64.6441% ( 424) 00:15:48.323 3.816 - 3.840: 68.0432% ( 456) 00:15:48.323 3.840 - 3.864: 71.6660% ( 486) 00:15:48.323 3.864 - 3.887: 75.7659% ( 550) 00:15:48.323 3.887 - 3.911: 79.9105% ( 556) 00:15:48.323 3.911 - 3.935: 83.8241% ( 525) 00:15:48.323 3.935 - 3.959: 86.2095% ( 320) 00:15:48.323 3.959 - 3.982: 88.0954% ( 
253) 00:15:48.323 3.982 - 4.006: 89.8323% ( 233) 00:15:48.323 4.006 - 4.030: 91.0250% ( 160) 00:15:48.323 4.030 - 4.053: 92.0686% ( 140) 00:15:48.323 4.053 - 4.077: 93.0153% ( 127) 00:15:48.323 4.077 - 4.101: 93.9322% ( 123) 00:15:48.323 4.101 - 4.124: 94.8789% ( 127) 00:15:48.323 4.124 - 4.148: 95.5572% ( 91) 00:15:48.323 4.148 - 4.172: 96.0492% ( 66) 00:15:48.323 4.172 - 4.196: 96.3623% ( 42) 00:15:48.323 4.196 - 4.219: 96.6232% ( 35) 00:15:48.323 4.219 - 4.243: 96.7797% ( 21) 00:15:48.323 4.243 - 4.267: 96.9064% ( 17) 00:15:48.323 4.267 - 4.290: 96.9959% ( 12) 00:15:48.323 4.290 - 4.314: 97.0928% ( 13) 00:15:48.323 4.314 - 4.338: 97.1897% ( 13) 00:15:48.323 4.338 - 4.361: 97.2643% ( 10) 00:15:48.323 4.361 - 4.385: 97.3164% ( 7) 00:15:48.323 4.385 - 4.409: 97.3537% ( 5) 00:15:48.323 4.409 - 4.433: 97.4283% ( 10) 00:15:48.323 4.433 - 4.456: 97.4581% ( 4) 00:15:48.323 4.456 - 4.480: 97.4804% ( 3) 00:15:48.323 4.480 - 4.504: 97.5028% ( 3) 00:15:48.323 4.504 - 4.527: 97.5252% ( 3) 00:15:48.323 4.599 - 4.622: 97.5550% ( 4) 00:15:48.323 4.622 - 4.646: 97.5624% ( 1) 00:15:48.323 4.670 - 4.693: 97.5848% ( 3) 00:15:48.323 4.693 - 4.717: 97.5997% ( 2) 00:15:48.323 4.717 - 4.741: 97.6072% ( 1) 00:15:48.323 4.741 - 4.764: 97.6370% ( 4) 00:15:48.323 4.764 - 4.788: 97.6817% ( 6) 00:15:48.323 4.788 - 4.812: 97.6966% ( 2) 00:15:48.323 4.812 - 4.836: 97.7264% ( 4) 00:15:48.323 4.836 - 4.859: 97.7712% ( 6) 00:15:48.323 4.859 - 4.883: 97.8159% ( 6) 00:15:48.323 4.883 - 4.907: 97.8755% ( 8) 00:15:48.323 4.907 - 4.930: 97.9128% ( 5) 00:15:48.323 4.930 - 4.954: 97.9426% ( 4) 00:15:48.323 4.954 - 4.978: 97.9650% ( 3) 00:15:48.323 4.978 - 5.001: 98.0097% ( 6) 00:15:48.323 5.001 - 5.025: 98.0544% ( 6) 00:15:48.323 5.025 - 5.049: 98.0991% ( 6) 00:15:48.323 5.049 - 5.073: 98.1215% ( 3) 00:15:48.323 5.073 - 5.096: 98.1364% ( 2) 00:15:48.323 5.096 - 5.120: 98.1662% ( 4) 00:15:48.323 5.120 - 5.144: 98.1886% ( 3) 00:15:48.323 5.144 - 5.167: 98.1960% ( 1) 00:15:48.323 5.167 - 5.191: 98.2408% ( 
6) 00:15:48.323 5.191 - 5.215: 98.2557% ( 2) 00:15:48.323 5.215 - 5.239: 98.2855% ( 4) 00:15:48.323 5.239 - 5.262: 98.3004% ( 2) 00:15:48.323 5.286 - 5.310: 98.3079% ( 1) 00:15:48.323 5.310 - 5.333: 98.3451% ( 5) 00:15:48.323 5.333 - 5.357: 98.3600% ( 2) 00:15:48.323 5.381 - 5.404: 98.3750% ( 2) 00:15:48.323 5.452 - 5.476: 98.3824% ( 1) 00:15:48.323 5.476 - 5.499: 98.3899% ( 1) 00:15:48.323 5.594 - 5.618: 98.4048% ( 2) 00:15:48.323 5.902 - 5.926: 98.4122% ( 1) 00:15:48.323 5.973 - 5.997: 98.4197% ( 1) 00:15:48.323 5.997 - 6.021: 98.4271% ( 1) 00:15:48.323 6.044 - 6.068: 98.4346% ( 1) 00:15:48.323 6.068 - 6.116: 98.4495% ( 2) 00:15:48.323 6.163 - 6.210: 98.4644% ( 2) 00:15:48.323 6.210 - 6.258: 98.4719% ( 1) 00:15:48.323 6.258 - 6.305: 98.4868% ( 2) 00:15:48.323 6.542 - 6.590: 98.5017% ( 2) 00:15:48.323 6.732 - 6.779: 98.5240% ( 3) 00:15:48.323 6.827 - 6.874: 98.5315% ( 1) 00:15:48.323 6.874 - 6.921: 98.5389% ( 1) 00:15:48.323 6.921 - 6.969: 98.5464% ( 1) 00:15:48.323 7.016 - 7.064: 98.5613% ( 2) 00:15:48.323 7.111 - 7.159: 98.5688% ( 1) 00:15:48.323 7.301 - 7.348: 98.5762% ( 1) 00:15:48.323 7.348 - 7.396: 98.5837% ( 1) 00:15:48.323 7.680 - 7.727: 98.6060% ( 3) 00:15:48.323 7.775 - 7.822: 98.6135% ( 1) 00:15:48.323 7.870 - 7.917: 98.6284% ( 2) 00:15:48.323 8.012 - 8.059: 98.6433% ( 2) 00:15:48.323 8.059 - 8.107: 98.6582% ( 2) 00:15:48.323 8.107 - 8.154: 98.6657% ( 1) 00:15:48.323 8.154 - 8.201: 98.6731% ( 1) 00:15:48.323 8.201 - 8.249: 98.6806% ( 1) 00:15:48.323 8.344 - 8.391: 98.6880% ( 1) 00:15:48.323 8.391 - 8.439: 98.7029% ( 2) 00:15:48.323 8.439 - 8.486: 98.7253% ( 3) 00:15:48.323 8.581 - 8.628: 98.7328% ( 1) 00:15:48.323 8.628 - 8.676: 98.7477% ( 2) 00:15:48.323 8.676 - 8.723: 98.7551% ( 1) 00:15:48.323 8.960 - 9.007: 98.7626% ( 1) 00:15:48.323 9.007 - 9.055: 98.7700% ( 1) 00:15:48.323 9.339 - 9.387: 98.7775% ( 1) 00:15:48.323 9.387 - 9.434: 98.7849% ( 1) 00:15:48.323 9.719 - 9.766: 98.7924% ( 1) 00:15:48.323 9.813 - 9.861: 98.8073% ( 2) 00:15:48.323 9.861 - 
9.908: 98.8148% ( 1) 00:15:48.323 9.956 - 10.003: 98.8222% ( 1) 00:15:48.323 10.382 - 10.430: 98.8297% ( 1) 00:15:48.323 10.524 - 10.572: 98.8371% ( 1) 00:15:48.323 10.667 - 10.714: 98.8446% ( 1) 00:15:48.323 11.188 - 11.236: 98.8520% ( 1) 00:15:48.323 11.520 - 11.567: 98.8669% ( 2) 00:15:48.323 11.947 - 11.994: 98.8744% ( 1) 00:15:48.323 11.994 - 12.041: 98.8818% ( 1) 00:15:48.323 12.041 - 12.089: 98.8893% ( 1) 00:15:48.323 12.136 - 12.231: 98.8968% ( 1) 00:15:48.323 12.421 - 12.516: 98.9117% ( 2) 00:15:48.323 12.705 - 12.800: 98.9191% ( 1) 00:15:48.323 12.800 - 12.895: 98.9266% ( 1) 00:15:48.323 12.895 - 12.990: 98.9340% ( 1) 00:15:48.323 12.990 - 13.084: 98.9489% ( 2) 00:15:48.323 13.084 - 13.179: 98.9638% ( 2) 00:15:48.323 13.653 - 13.748: 98.9713% ( 1) 00:15:48.323 14.127 - 14.222: 98.9862% ( 2) 00:15:48.323 14.412 - 14.507: 98.9937% ( 1) 00:15:48.323 14.696 - 14.791: 99.0086% ( 2) 00:15:48.323 14.791 - 14.886: 99.0160% ( 1) 00:15:48.323 14.981 - 15.076: 99.0235% ( 1) 00:15:48.323 15.076 - 15.170: 99.0309% ( 1) 00:15:48.323 15.360 - 15.455: 99.0384% ( 1) 00:15:48.323 17.161 - 17.256: 99.0458% ( 1) 00:15:48.323 17.256 - 17.351: 99.0682% ( 3) 00:15:48.323 17.351 - 17.446: 99.0980% ( 4) 00:15:48.323 17.446 - 17.541: 99.1055% ( 1) 00:15:48.323 17.541 - 17.636: 99.1353% ( 4) 00:15:48.323 17.636 - 17.730: 99.1800% ( 6) 00:15:48.323 17.730 - 17.825: 99.2397% ( 8) 00:15:48.323 17.825 - 17.920: 99.2918% ( 7) 00:15:48.323 17.920 - 18.015: 99.3366% ( 6) 00:15:48.323 18.015 - 18.110: 99.3738% ( 5) 00:15:48.323 18.110 - 18.204: 99.4633% ( 12) 00:15:48.323 18.204 - 18.299: 99.5155% ( 7) 00:15:48.323 18.299 - 18.394: 99.5975% ( 11) 00:15:48.323 18.394 - 18.489: 99.6347% ( 5) 00:15:48.323 18.489 - 18.584: 99.6646% ( 4) 00:15:48.323 18.584 - 18.679: 99.7093% ( 6) 00:15:48.323 18.679 - 18.773: 99.7466% ( 5) 00:15:48.323 18.773 - 18.868: 99.7764% ( 4) 00:15:48.323 18.868 - 18.963: 99.8062% ( 4) 00:15:48.323 18.963 - 19.058: 99.8211% ( 2) 00:15:48.323 19.058 - 19.153: 99.8286% ( 
1) 00:15:48.323 19.153 - 19.247: 99.8435% ( 2) 00:15:48.323 19.247 - 19.342: 99.8584% ( 2) 00:15:48.323 19.342 - 19.437: 99.8658% ( 1) 00:15:48.323 19.437 - 19.532: 99.8882% ( 3) 00:15:48.323 19.721 - 19.816: 99.8956% ( 1) 00:15:48.323 20.480 - 20.575: 99.9031% ( 1) 00:15:48.323 21.144 - 21.239: 99.9105% ( 1) 00:15:48.323 22.756 - 22.850: 99.9180% ( 1) 00:15:48.323 3980.705 - 4004.978: 99.9776% ( 8) 00:15:48.323 4004.978 - 4029.250: 100.0000% ( 3) 00:15:48.323 00:15:48.323 Complete histogram 00:15:48.323 ================== 00:15:48.323 Range in us Cumulative Count 00:15:48.323 2.050 - 2.062: 0.0149% ( 2) 00:15:48.323 2.062 - 2.074: 16.0790% ( 2155) 00:15:48.323 2.074 - 2.086: 35.5870% ( 2617) 00:15:48.323 2.086 - 2.098: 37.6146% ( 272) 00:15:48.324 2.098 - 2.110: 50.6373% ( 1747) 00:15:48.324 2.110 - 2.121: 57.8979% ( 974) 00:15:48.324 2.121 - 2.133: 59.4633% ( 210) 00:15:48.324 2.133 - 2.145: 68.7887% ( 1251) 00:15:48.324 2.145 - 2.157: 72.6277% ( 515) 00:15:48.324 2.157 - 2.169: 74.0142% ( 186) 00:15:48.324 2.169 - 2.181: 78.9713% ( 665) 00:15:48.324 2.181 - 2.193: 80.9169% ( 261) 00:15:48.324 2.193 - 2.204: 81.6847% ( 103) 00:15:48.324 2.204 - 2.216: 85.6728% ( 535) 00:15:48.324 2.216 - 2.228: 88.2743% ( 349) 00:15:48.324 2.228 - 2.240: 89.9292% ( 222) 00:15:48.324 2.240 - 2.252: 92.5680% ( 354) 00:15:48.324 2.252 - 2.264: 93.5296% ( 129) 00:15:48.324 2.264 - 2.276: 93.8576% ( 44) 00:15:48.324 2.276 - 2.287: 94.2303% ( 50) 00:15:48.324 2.287 - 2.299: 94.6925% ( 62) 00:15:48.324 2.299 - 2.311: 95.2218% ( 71) 00:15:48.324 2.311 - 2.323: 95.4081% ( 25) 00:15:48.324 2.323 - 2.335: 95.4752% ( 9) 00:15:48.324 2.335 - 2.347: 95.5348% ( 8) 00:15:48.324 2.347 - 2.359: 95.7063% ( 23) 00:15:48.324 2.359 - 2.370: 96.0492% ( 46) 00:15:48.324 2.370 - 2.382: 96.4816% ( 58) 00:15:48.324 2.382 - 2.394: 96.8915% ( 55) 00:15:48.324 2.394 - 2.406: 97.1226% ( 31) 00:15:48.324 2.406 - 2.418: 97.2643% ( 19) 00:15:48.324 2.418 - 2.430: 97.4059% ( 19) 00:15:48.324 2.430 - 2.441: 
97.5177% ( 15) 00:15:48.324 2.441 - 2.453: 97.6370% ( 16) 00:15:48.324 2.453 - 2.465: 97.7562% ( 16) 00:15:48.324 2.465 - 2.477: 97.8904% ( 18) 00:15:48.324 2.477 - 2.489: 98.0022% ( 15) 00:15:48.324 2.489 - 2.501: 98.0619% ( 8) 00:15:48.324 2.501 - 2.513: 98.0768% ( 2) 00:15:48.324 2.513 - 2.524: 98.0917% ( 2) 00:15:48.324 2.524 - 2.536: 98.1588% ( 9) 00:15:48.324 2.536 - 2.548: 98.1886% ( 4) 00:15:48.324 2.548 - 2.560: 98.2184% ( 4) 00:15:48.324 2.560 - 2.572: 98.2259% ( 1) 00:15:48.324 2.572 - 2.584: 98.2408% ( 2) 00:15:48.324 2.584 - 2.596: 98.2557% ( 2) 00:15:48.324 2.596 - 2.607: 98.2780% ( 3) 00:15:48.324 2.607 - 2.619: 98.2930% ( 2) 00:15:48.324 2.619 - 2.631: 98.3377% ( 6) 00:15:48.324 2.631 - 2.643: 98.3526% ( 2) 00:15:48.324 2.667 - 2.679: 98.3824% ( 4) 00:15:48.324 2.679 - 2.690: 98.3973% ( 2) 00:15:48.324 2.702 - 2.714: 98.4122% ( 2) 00:15:48.324 2.714 - 2.726: 98.4197% ( 1) 00:15:48.324 2.750 - 2.761: 98.4271% ( 1) 00:15:48.324 2.761 - 2.773: 98.4346% ( 1) 00:15:48.324 2.856 - 2.868: 98.4420% ( 1) 00:15:48.324 2.939 - 2.951: 98.4495% ( 1) 00:15:48.324 3.034 - 3.058: 98.4570% ( 1) 00:15:48.324 3.319 - 3.342: 98.4644% ( 1) 00:15:48.324 3.366 - 3.390: 98.4719% ( 1) 00:15:48.324 3.390 - 3.413: 98.4793% ( 1) 00:15:48.324 3.413 - 3.437: 98.4942% ( 2) 00:15:48.324 3.437 - 3.461: 98.5017% ( 1) 00:15:48.324 3.461 - 3.484: 98.5166% ( 2) 00:15:48.324 3.484 - 3.508: 98.5315% ( 2) 00:15:48.324 3.508 - 3.532: 98.5389% ( 1) 00:15:48.324 3.532 - 3.556: 98.5688% ( 4) 00:15:48.324 3.556 - 3.579: 98.5837% ( 2) 00:15:48.324 3.627 - 3.650: 98.5986% ( 2) 00:15:48.324 3.674 - 3.698: 98.6060% ( 1) 00:15:48.324 3.698 - 3.721: 98.6135% ( 1) 00:15:48.324 3.745 - 3.769: 98.6359% ( 3) 00:15:48.324 3.864 - 3.887: 98.6657% ( 4) 00:15:48.324 3.887 - 3.911: 98.6806% ( 2) 00:15:48.324 3.911 - 3.935: 98.6880% ( 1) 00:15:48.324 3.935 - 3.959: 98.6955% ( 1) 00:15:48.324 3.959 - 3.982: 98.7029% ( 1) 00:15:48.324 4.124 - 4.148: 98.7104% ( 1) 00:15:48.324 4.480 - 4.504: 98.7179% ( 1) 
00:15:48.324 4.907 - 4.930: 98.7253% ( 1) 00:15:48.324 5.215 - 5.239: 98.7328% ( 1) [2024-07-14 13:51:25.884712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.324 5.239 - 5.262: 98.7402% ( 1) 00:15:48.324 5.357 - 5.381: 98.7477% ( 1) 00:15:48.324 5.428 - 5.452: 98.7551% ( 1) 00:15:48.324 5.547 - 5.570: 98.7700% ( 2) 00:15:48.324 5.665 - 5.689: 98.7775% ( 1) 00:15:48.324 5.973 - 5.997: 98.7849% ( 1) 00:15:48.324 6.116 - 6.163: 98.7924% ( 1) 00:15:48.324 6.163 - 6.210: 98.7999% ( 1) 00:15:48.324 6.305 - 6.353: 98.8148% ( 2) 00:15:48.324 6.353 - 6.400: 98.8297% ( 2) 00:15:48.324 6.400 - 6.447: 98.8371% ( 1) 00:15:48.324 6.495 - 6.542: 98.8446% ( 1) 00:15:48.324 6.542 - 6.590: 98.8520% ( 1) 00:15:48.324 6.684 - 6.732: 98.8595% ( 1) 00:15:48.324 6.779 - 6.827: 98.8669% ( 1) 00:15:48.324 7.064 - 7.111: 98.8744% ( 1) 00:15:48.324 7.111 - 7.159: 98.8818% ( 1) 00:15:48.324 7.253 - 7.301: 98.8893% ( 1) 00:15:48.324 15.455 - 15.550: 98.8968% ( 1) 00:15:48.324 15.550 - 15.644: 98.9042% ( 1) 00:15:48.324 15.644 - 15.739: 98.9117% ( 1) 00:15:48.324 15.739 - 15.834: 98.9489% ( 5) 00:15:48.324 15.929 - 16.024: 98.9788% ( 4) 00:15:48.324 16.024 - 16.119: 99.0011% ( 3) 00:15:48.324 16.119 - 16.213: 99.0086% ( 1) 00:15:48.324 16.213 - 16.308: 99.0458% ( 5) 00:15:48.324 16.308 - 16.403: 99.0533% ( 1) 00:15:48.324 16.403 - 16.498: 99.0906% ( 5) 00:15:48.324 16.498 - 16.593: 99.1353% ( 6) 00:15:48.324 16.593 - 16.687: 99.1949% ( 8) 00:15:48.324 16.687 - 16.782: 99.2098% ( 2) 00:15:48.324 16.782 - 16.877: 99.2620% ( 7) 00:15:48.324 16.877 - 16.972: 99.2993% ( 5) 00:15:48.324 16.972 - 17.067: 99.3142% ( 2) 00:15:48.324 17.067 - 17.161: 99.3440% ( 4) 00:15:48.324 17.161 - 17.256: 99.3589% ( 2) 00:15:48.324 17.351 - 17.446: 99.3664% ( 1) 00:15:48.324 17.446 - 17.541: 99.3738% ( 1) 00:15:48.324 17.541 - 17.636: 99.3887% ( 2) 00:15:48.324 17.825 - 17.920: 99.3962% ( 1) 00:15:48.324 18.015 - 18.110: 99.4037% ( 1) 
00:15:48.324 18.110 - 18.204: 99.4111% ( 1) 00:15:48.324 18.394 - 18.489: 99.4186% ( 1) 00:15:48.324 31.099 - 31.289: 99.4260% ( 1) 00:15:48.324 86.850 - 87.230: 99.4335% ( 1) 00:15:48.324 3543.799 - 3568.071: 99.4409% ( 1) 00:15:48.324 3980.705 - 4004.978: 99.9478% ( 68) 00:15:48.324 4004.978 - 4029.250: 100.0000% ( 7) 00:15:48.324 00:15:48.324 13:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:48.324 13:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:48.324 13:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:48.324 13:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:48.324 13:51:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:48.324 [ 00:15:48.324 { 00:15:48.324 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:48.324 "subtype": "Discovery", 00:15:48.324 "listen_addresses": [], 00:15:48.324 "allow_any_host": true, 00:15:48.324 "hosts": [] 00:15:48.324 }, 00:15:48.324 { 00:15:48.324 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:48.324 "subtype": "NVMe", 00:15:48.324 "listen_addresses": [ 00:15:48.324 { 00:15:48.324 "trtype": "VFIOUSER", 00:15:48.324 "adrfam": "IPv4", 00:15:48.324 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:48.324 "trsvcid": "0" 00:15:48.324 } 00:15:48.324 ], 00:15:48.324 "allow_any_host": true, 00:15:48.324 "hosts": [], 00:15:48.324 "serial_number": "SPDK1", 00:15:48.324 "model_number": "SPDK bdev Controller", 00:15:48.324 "max_namespaces": 32, 00:15:48.324 "min_cntlid": 1, 00:15:48.324 "max_cntlid": 65519, 00:15:48.324 "namespaces": [ 00:15:48.324 { 00:15:48.324 "nsid": 1, 00:15:48.324 "bdev_name": "Malloc1", 00:15:48.324 "name": "Malloc1", 
00:15:48.324 "nguid": "2D2CF0532364490B8CA02A82C181FA5D", 00:15:48.324 "uuid": "2d2cf053-2364-490b-8ca0-2a82c181fa5d" 00:15:48.324 }, 00:15:48.324 { 00:15:48.324 "nsid": 2, 00:15:48.324 "bdev_name": "Malloc3", 00:15:48.324 "name": "Malloc3", 00:15:48.324 "nguid": "704D55BCD7214221A922168B5298015A", 00:15:48.324 "uuid": "704d55bc-d721-4221-a922-168b5298015a" 00:15:48.324 } 00:15:48.324 ] 00:15:48.324 }, 00:15:48.324 { 00:15:48.324 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:48.324 "subtype": "NVMe", 00:15:48.324 "listen_addresses": [ 00:15:48.324 { 00:15:48.324 "trtype": "VFIOUSER", 00:15:48.324 "adrfam": "IPv4", 00:15:48.324 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:48.324 "trsvcid": "0" 00:15:48.324 } 00:15:48.324 ], 00:15:48.324 "allow_any_host": true, 00:15:48.324 "hosts": [], 00:15:48.324 "serial_number": "SPDK2", 00:15:48.324 "model_number": "SPDK bdev Controller", 00:15:48.324 "max_namespaces": 32, 00:15:48.324 "min_cntlid": 1, 00:15:48.324 "max_cntlid": 65519, 00:15:48.324 "namespaces": [ 00:15:48.324 { 00:15:48.324 "nsid": 1, 00:15:48.324 "bdev_name": "Malloc2", 00:15:48.324 "name": "Malloc2", 00:15:48.324 "nguid": "C5F53BF8A3E14A338234470C89E171FC", 00:15:48.324 "uuid": "c5f53bf8-a3e1-4a33-8234-470c89e171fc" 00:15:48.324 } 00:15:48.324 ] 00:15:48.324 } 00:15:48.324 ] 00:15:48.324 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1422023 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1261 -- # local i=0 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:48.325 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:48.325 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.583 [2024-07-14 13:51:26.327994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.583 Malloc4 00:15:48.583 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:48.841 [2024-07-14 13:51:26.707746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.841 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:48.841 Asynchronous Event Request test 00:15:48.841 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.841 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.841 Registering asynchronous event callbacks... 00:15:48.841 Starting namespace attribute notice tests for all controllers... 00:15:48.841 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:48.841 aer_cb - Changed Namespace 00:15:48.841 Cleaning up... 
00:15:49.099 [ 00:15:49.099 { 00:15:49.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:49.099 "subtype": "Discovery", 00:15:49.099 "listen_addresses": [], 00:15:49.099 "allow_any_host": true, 00:15:49.099 "hosts": [] 00:15:49.099 }, 00:15:49.099 { 00:15:49.099 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:49.099 "subtype": "NVMe", 00:15:49.099 "listen_addresses": [ 00:15:49.099 { 00:15:49.099 "trtype": "VFIOUSER", 00:15:49.099 "adrfam": "IPv4", 00:15:49.099 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:49.099 "trsvcid": "0" 00:15:49.099 } 00:15:49.099 ], 00:15:49.099 "allow_any_host": true, 00:15:49.099 "hosts": [], 00:15:49.099 "serial_number": "SPDK1", 00:15:49.099 "model_number": "SPDK bdev Controller", 00:15:49.099 "max_namespaces": 32, 00:15:49.099 "min_cntlid": 1, 00:15:49.099 "max_cntlid": 65519, 00:15:49.099 "namespaces": [ 00:15:49.099 { 00:15:49.099 "nsid": 1, 00:15:49.099 "bdev_name": "Malloc1", 00:15:49.099 "name": "Malloc1", 00:15:49.099 "nguid": "2D2CF0532364490B8CA02A82C181FA5D", 00:15:49.099 "uuid": "2d2cf053-2364-490b-8ca0-2a82c181fa5d" 00:15:49.099 }, 00:15:49.099 { 00:15:49.099 "nsid": 2, 00:15:49.099 "bdev_name": "Malloc3", 00:15:49.099 "name": "Malloc3", 00:15:49.099 "nguid": "704D55BCD7214221A922168B5298015A", 00:15:49.099 "uuid": "704d55bc-d721-4221-a922-168b5298015a" 00:15:49.099 } 00:15:49.099 ] 00:15:49.099 }, 00:15:49.099 { 00:15:49.099 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:49.099 "subtype": "NVMe", 00:15:49.100 "listen_addresses": [ 00:15:49.100 { 00:15:49.100 "trtype": "VFIOUSER", 00:15:49.100 "adrfam": "IPv4", 00:15:49.100 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:49.100 "trsvcid": "0" 00:15:49.100 } 00:15:49.100 ], 00:15:49.100 "allow_any_host": true, 00:15:49.100 "hosts": [], 00:15:49.100 "serial_number": "SPDK2", 00:15:49.100 "model_number": "SPDK bdev Controller", 00:15:49.100 "max_namespaces": 32, 00:15:49.100 "min_cntlid": 1, 00:15:49.100 "max_cntlid": 65519, 00:15:49.100 "namespaces": [ 
00:15:49.100 { 00:15:49.100 "nsid": 1, 00:15:49.100 "bdev_name": "Malloc2", 00:15:49.100 "name": "Malloc2", 00:15:49.100 "nguid": "C5F53BF8A3E14A338234470C89E171FC", 00:15:49.100 "uuid": "c5f53bf8-a3e1-4a33-8234-470c89e171fc" 00:15:49.100 }, 00:15:49.100 { 00:15:49.100 "nsid": 2, 00:15:49.100 "bdev_name": "Malloc4", 00:15:49.100 "name": "Malloc4", 00:15:49.100 "nguid": "BFEDA50C147F44E9A1A803F619B956B0", 00:15:49.100 "uuid": "bfeda50c-147f-44e9-a1a8-03f619b956b0" 00:15:49.100 } 00:15:49.100 ] 00:15:49.100 } 00:15:49.100 ] 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1422023 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1415810 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1415810 ']' 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1415810 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1415810 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1415810' 00:15:49.100 killing process with pid 1415810 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1415810 00:15:49.100 13:51:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1415810 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1422163 00:15:49.666 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1422163' 00:15:49.666 Process pid: 1422163 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1422163 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1422163 ']' 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:49.667 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:49.667 [2024-07-14 13:51:27.388016] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:49.667 [2024-07-14 13:51:27.389060] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:49.667 [2024-07-14 13:51:27.389115] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.667 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.667 [2024-07-14 13:51:27.453966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.667 [2024-07-14 13:51:27.546216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.667 [2024-07-14 13:51:27.546272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.667 [2024-07-14 13:51:27.546289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.667 [2024-07-14 13:51:27.546303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.667 [2024-07-14 13:51:27.546314] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:49.667 [2024-07-14 13:51:27.546375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.667 [2024-07-14 13:51:27.546428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.667 [2024-07-14 13:51:27.546545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.667 [2024-07-14 13:51:27.546547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.925 [2024-07-14 13:51:27.657428] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:49.925 [2024-07-14 13:51:27.657698] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:49.925 [2024-07-14 13:51:27.658015] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:49.925 [2024-07-14 13:51:27.658639] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:49.925 [2024-07-14 13:51:27.658904] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:49.925 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.925 13:51:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:49.925 13:51:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:50.858 13:51:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:51.114 13:51:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:51.114 13:51:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:51.114 13:51:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:51.114 13:51:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:51.114 13:51:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:51.371 Malloc1 00:15:51.371 13:51:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:51.629 13:51:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:51.886 13:51:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:52.450 13:51:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:52.450 13:51:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:15:52.450 13:51:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:52.450 Malloc2 00:15:52.450 13:51:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:52.706 13:51:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:52.963 13:51:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1422163 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1422163 ']' 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1422163 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1422163 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1422163' 00:15:53.221 killing 
process with pid 1422163 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1422163 00:15:53.221 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1422163 00:15:53.479 13:51:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:53.479 13:51:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:53.479 00:15:53.479 real 0m53.167s 00:15:53.479 user 3m29.800s 00:15:53.479 sys 0m4.285s 00:15:53.479 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.479 13:51:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:53.479 ************************************ 00:15:53.479 END TEST nvmf_vfio_user 00:15:53.479 ************************************ 00:15:53.737 13:51:31 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:53.737 13:51:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:53.737 13:51:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.737 13:51:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.737 ************************************ 00:15:53.737 START TEST nvmf_vfio_user_nvme_compliance 00:15:53.737 ************************************ 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:53.737 * Looking for test storage... 
00:15:53.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.737 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.738 13:51:31 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:53.738 13:51:31 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1422760 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1422760' 00:15:53.738 Process pid: 1422760 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1422760 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1422760 ']' 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:53.738 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.738 [2024-07-14 13:51:31.615230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:53.738 [2024-07-14 13:51:31.615318] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.738 [2024-07-14 13:51:31.675199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.996 [2024-07-14 13:51:31.761371] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.996 [2024-07-14 13:51:31.761424] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:53.996 [2024-07-14 13:51:31.761438] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.996 [2024-07-14 13:51:31.761450] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.996 [2024-07-14 13:51:31.761460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.996 [2024-07-14 13:51:31.761525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.996 [2024-07-14 13:51:31.761551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.996 [2024-07-14 13:51:31.761554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.996 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:53.996 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:53.996 13:51:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.930 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.188 malloc0 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.189 13:51:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:55.189 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.189 00:15:55.189 00:15:55.189 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.189 http://cunit.sourceforge.net/ 00:15:55.189 00:15:55.189 00:15:55.189 Suite: nvme_compliance 00:15:55.189 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 13:51:33.112866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.189 [2024-07-14 13:51:33.114373] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:55.189 [2024-07-14 13:51:33.114397] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:55.189 [2024-07-14 13:51:33.114426] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:55.189 [2024-07-14 13:51:33.115900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.189 passed 00:15:55.460 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 13:51:33.204511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.460 [2024-07-14 13:51:33.207529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.460 passed 00:15:55.460 Test: admin_identify_ns ...[2024-07-14 13:51:33.294425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.460 [2024-07-14 13:51:33.353892] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:55.460 [2024-07-14 13:51:33.361893] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:55.460 [2024-07-14 13:51:33.382021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:15:55.460 passed 00:15:55.726 Test: admin_get_features_mandatory_features ...[2024-07-14 13:51:33.467340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.726 [2024-07-14 13:51:33.470359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.726 passed 00:15:55.726 Test: admin_get_features_optional_features ...[2024-07-14 13:51:33.557900] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.726 [2024-07-14 13:51:33.560930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.726 passed 00:15:55.726 Test: admin_set_features_number_of_queues ...[2024-07-14 13:51:33.647637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.983 [2024-07-14 13:51:33.751996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.983 passed 00:15:55.983 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 13:51:33.837363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.983 [2024-07-14 13:51:33.840389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.983 passed 00:15:55.983 Test: admin_get_log_page_with_lpo ...[2024-07-14 13:51:33.925531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.240 [2024-07-14 13:51:33.993896] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:56.240 [2024-07-14 13:51:34.006986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.240 passed 00:15:56.240 Test: fabric_property_get ...[2024-07-14 13:51:34.091328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.240 [2024-07-14 13:51:34.092589] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:15:56.240 [2024-07-14 13:51:34.094349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.240 passed 00:15:56.240 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 13:51:34.180904] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.240 [2024-07-14 13:51:34.182218] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:56.240 [2024-07-14 13:51:34.183940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.240 passed 00:15:56.497 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 13:51:34.270398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.497 [2024-07-14 13:51:34.353900] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:56.497 [2024-07-14 13:51:34.369900] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:56.497 [2024-07-14 13:51:34.375023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.497 passed 00:15:56.497 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 13:51:34.457153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.497 [2024-07-14 13:51:34.458458] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:56.497 [2024-07-14 13:51:34.460173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.755 passed 00:15:56.755 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 13:51:34.548324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.755 [2024-07-14 13:51:34.623884] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:56.755 [2024-07-14 13:51:34.647888] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:56.755 [2024-07-14 13:51:34.652990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.755 passed 00:15:56.755 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 13:51:34.734815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.755 [2024-07-14 13:51:34.736091] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:56.755 [2024-07-14 13:51:34.736130] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:57.012 [2024-07-14 13:51:34.737835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.012 passed 00:15:57.012 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 13:51:34.824366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.012 [2024-07-14 13:51:34.915889] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:57.012 [2024-07-14 13:51:34.923900] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:57.012 [2024-07-14 13:51:34.931885] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:57.012 [2024-07-14 13:51:34.939888] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:57.012 [2024-07-14 13:51:34.968996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.269 passed 00:15:57.269 Test: admin_create_io_sq_verify_pc ...[2024-07-14 13:51:35.052548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.269 [2024-07-14 13:51:35.068900] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:57.269 [2024-07-14 13:51:35.086821] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.269 passed 00:15:57.269 Test: admin_create_io_qp_max_qps ...[2024-07-14 13:51:35.171417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.641 [2024-07-14 13:51:36.279891] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:58.899 [2024-07-14 13:51:36.666621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.899 passed 00:15:58.899 Test: admin_create_io_sq_shared_cq ...[2024-07-14 13:51:36.748423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.156 [2024-07-14 13:51:36.880889] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:59.156 [2024-07-14 13:51:36.917971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.156 passed 00:15:59.156 00:15:59.156 Run Summary: Type Total Ran Passed Failed Inactive 00:15:59.156 suites 1 1 n/a 0 0 00:15:59.156 tests 18 18 18 0 0 00:15:59.156 asserts 360 360 360 0 n/a 00:15:59.156 00:15:59.156 Elapsed time = 1.582 seconds 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1422760 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1422760 ']' 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1422760 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1422760 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1422760' 00:15:59.156 killing process with pid 1422760 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 1422760 00:15:59.156 13:51:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1422760 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:59.414 00:15:59.414 real 0m5.757s 00:15:59.414 user 0m16.245s 00:15:59.414 sys 0m0.526s 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:59.414 ************************************ 00:15:59.414 END TEST nvmf_vfio_user_nvme_compliance 00:15:59.414 ************************************ 00:15:59.414 13:51:37 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:59.414 13:51:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:59.414 13:51:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:59.414 13:51:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:59.414 ************************************ 00:15:59.414 START TEST nvmf_vfio_user_fuzz 00:15:59.414 ************************************ 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:59.414 * Looking for test storage... 00:15:59.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.414 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1423483 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1423483' 00:15:59.415 Process pid: 1423483 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1423483 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1423483 ']' 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:59.415 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.981 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:59.981 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:59.981 13:51:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 malloc0 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.914 
13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:00.914 13:51:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:33.027 Fuzzing completed. 
Shutting down the fuzz application 00:16:33.027 00:16:33.027 Dumping successful admin opcodes: 00:16:33.027 8, 9, 10, 24, 00:16:33.027 Dumping successful io opcodes: 00:16:33.027 0, 00:16:33.027 NS: 0x200003a1ef00 I/O qp, Total commands completed: 640756, total successful commands: 2485, random_seed: 2614089536 00:16:33.027 NS: 0x200003a1ef00 admin qp, Total commands completed: 156593, total successful commands: 1262, random_seed: 1320063680 00:16:33.027 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:33.027 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.027 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.027 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1423483 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1423483 ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1423483 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1423483 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1423483' 00:16:33.028 killing process with pid 1423483 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@965 -- # kill 1423483 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1423483 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:33.028 00:16:33.028 real 0m32.393s 00:16:33.028 user 0m33.121s 00:16:33.028 sys 0m25.494s 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:33.028 13:52:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.028 ************************************ 00:16:33.028 END TEST nvmf_vfio_user_fuzz 00:16:33.028 ************************************ 00:16:33.028 13:52:09 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:33.028 13:52:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:33.028 13:52:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:33.028 13:52:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.028 ************************************ 00:16:33.028 START TEST nvmf_host_management 00:16:33.028 ************************************ 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:33.028 * Looking for test storage... 
00:16:33.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.028 
13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:33.028 13:52:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.965 
13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.965 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:33.966 13:52:11 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:33.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:16:33.966 00:16:33.966 --- 10.0.0.2 ping statistics --- 00:16:33.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.966 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:33.966 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:16:34.224 00:16:34.224 --- 10.0.0.1 ping statistics --- 00:16:34.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.224 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:34.224 13:52:11 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1428808 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1428808 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1428808 ']' 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:34.224 13:52:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.224 [2024-07-14 13:52:12.017847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:16:34.224 [2024-07-14 13:52:12.017952] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.224 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.224 [2024-07-14 13:52:12.088205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.224 [2024-07-14 13:52:12.178441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.224 [2024-07-14 13:52:12.178507] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.224 [2024-07-14 13:52:12.178536] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.224 [2024-07-14 13:52:12.178547] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.224 [2024-07-14 13:52:12.178557] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.224 [2024-07-14 13:52:12.178609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.224 [2024-07-14 13:52:12.178664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.224 [2024-07-14 13:52:12.178730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:34.224 [2024-07-14 13:52:12.178732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 [2024-07-14 13:52:12.316480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 13:52:12 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 Malloc0 00:16:34.483 [2024-07-14 13:52:12.375434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1428973 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1428973 /var/tmp/bdevperf.sock 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1428973 ']' 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@832 -- # local max_retries=100 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:34.483 { 00:16:34.483 "params": { 00:16:34.483 "name": "Nvme$subsystem", 00:16:34.483 "trtype": "$TEST_TRANSPORT", 00:16:34.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.483 "adrfam": "ipv4", 00:16:34.483 "trsvcid": "$NVMF_PORT", 00:16:34.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.483 "hdgst": ${hdgst:-false}, 00:16:34.483 "ddgst": ${ddgst:-false} 00:16:34.483 }, 00:16:34.483 "method": "bdev_nvme_attach_controller" 00:16:34.483 } 00:16:34.483 EOF 00:16:34.483 )") 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:34.483 13:52:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:34.483 "params": { 00:16:34.483 "name": "Nvme0", 00:16:34.483 "trtype": "tcp", 00:16:34.483 "traddr": "10.0.0.2", 00:16:34.483 "adrfam": "ipv4", 00:16:34.483 "trsvcid": "4420", 00:16:34.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:34.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:34.483 "hdgst": false, 00:16:34.483 "ddgst": false 00:16:34.483 }, 00:16:34.483 "method": "bdev_nvme_attach_controller" 00:16:34.483 }' 00:16:34.483 [2024-07-14 13:52:12.454027] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:34.483 [2024-07-14 13:52:12.454102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428973 ] 00:16:34.741 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.741 [2024-07-14 13:52:12.515691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.741 [2024-07-14 13:52:12.602870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.001 Running I/O for 10 seconds... 
00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:35.001 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:35.002 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:35.002 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.002 
13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.002 13:52:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.002 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:35.002 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:35.002 13:52:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=552 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 552 -ge 100 ']' 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:35.262 13:52:13 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.262 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.262 [2024-07-14 13:52:13.178050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 
[2024-07-14 13:52:13.178258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.262 [2024-07-14 13:52:13.178304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178397] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178409] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178537] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.178599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e60120 is same with the state(5) to be set 00:16:35.263 [2024-07-14 13:52:13.181563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:35.263 [2024-07-14 13:52:13.181735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.181965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.181980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 
[2024-07-14 13:52:13.182454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.263 [2024-07-14 13:52:13.182486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.263 [2024-07-14 13:52:13.182628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.263 [2024-07-14 13:52:13.182643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:35.264 [2024-07-14 13:52:13.182659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 13:52:13 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.264 [2024-07-14 13:52:13.182787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.264 [2024-07-14 13:52:13.182904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:35.264 [2024-07-14 13:52:13.182942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.264 [2024-07-14 13:52:13.182958] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:35.264 [2024-07-14 13:52:13.182973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs repeat for cid:42-63 (lba:87296-89984), trimmed ...]
00:16:35.264 [2024-07-14 13:52:13.183700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:16:35.264 [2024-07-14 13:52:13.183773] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2462110 was disconnected and freed. reset controller.
00:16:35.264 [2024-07-14 13:52:13.184892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:35.264 task offset: 81920 on job bdev=Nvme0n1 fails
00:16:35.264
00:16:35.264 Latency(us)
00:16:35.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:35.264 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:35.264 Job: Nvme0n1 ended in about 0.40 seconds with error
00:16:35.264 	 Verification LBA range: start 0x0 length 0x400
00:16:35.264 	 Nvme0n1 : 0.40 1584.71 99.04 158.47 0.00 35657.46 2864.17 33787.45
00:16:35.264 ===================================================================================================================
00:16:35.264 Total : 1584.71 99.04 158.47 0.00 35657.46 2864.17 33787.45
00:16:35.264 [2024-07-14 13:52:13.186783] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:35.264 [2024-07-14 13:52:13.186824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20511e0 (9): Bad file descriptor
00:16:35.264 13:52:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:35.264 13:52:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:16:35.522 [2024-07-14 13:52:13.288023] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1428973 00:16:36.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1428973) - No such process 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:36.472 { 00:16:36.472 "params": { 00:16:36.472 "name": "Nvme$subsystem", 00:16:36.472 "trtype": "$TEST_TRANSPORT", 00:16:36.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:36.472 "adrfam": "ipv4", 00:16:36.472 "trsvcid": "$NVMF_PORT", 00:16:36.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:36.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:36.472 "hdgst": ${hdgst:-false}, 00:16:36.472 "ddgst": ${ddgst:-false} 00:16:36.472 }, 00:16:36.472 "method": "bdev_nvme_attach_controller" 00:16:36.472 } 00:16:36.472 EOF 00:16:36.472 )") 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:36.472 13:52:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:36.472 "params": { 00:16:36.472 "name": "Nvme0", 00:16:36.472 "trtype": "tcp", 00:16:36.472 "traddr": "10.0.0.2", 00:16:36.472 "adrfam": "ipv4", 00:16:36.472 "trsvcid": "4420", 00:16:36.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:36.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:36.472 "hdgst": false, 00:16:36.472 "ddgst": false 00:16:36.472 }, 00:16:36.472 "method": "bdev_nvme_attach_controller" 00:16:36.472 }' 00:16:36.472 [2024-07-14 13:52:14.240853] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:36.472 [2024-07-14 13:52:14.240956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429133 ] 00:16:36.472 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.472 [2024-07-14 13:52:14.302535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.472 [2024-07-14 13:52:14.389910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.040 Running I/O for 1 seconds... 
00:16:37.974
00:16:37.974 Latency(us)
00:16:37.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:37.974 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:37.974 	 Verification LBA range: start 0x0 length 0x400
00:16:37.974 	 Nvme0n1 : 1.06 1636.03 102.25 0.00 0.00 37117.36 8543.95 48739.37
00:16:37.974 ===================================================================================================================
00:16:37.974 Total : 1636.03 102.25 0.00 0.00 37117.36 8543.95 48739.37
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:38.232
13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1428808 ']' 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1428808 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1428808 ']' 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1428808 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1428808 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1428808' 00:16:38.232 killing process with pid 1428808 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1428808 00:16:38.232 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1428808 00:16:38.489 [2024-07-14 13:52:16.375907] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.489 13:52:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.026 13:52:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.026 13:52:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:41.026 00:16:41.026 real 0m8.695s 00:16:41.026 user 0m20.005s 00:16:41.026 sys 0m2.585s 00:16:41.026 13:52:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:41.026 13:52:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.026 ************************************ 00:16:41.026 END TEST nvmf_host_management 00:16:41.026 ************************************ 00:16:41.026 13:52:18 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:41.026 13:52:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:41.026 13:52:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:41.026 13:52:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.026 ************************************ 00:16:41.026 START TEST nvmf_lvol 00:16:41.026 ************************************ 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:41.026 * Looking for test storage... 
00:16:41.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.026 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.027 13:52:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:42.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:42.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:42.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:42.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:16:42.933 00:16:42.933 --- 10.0.0.2 ping statistics --- 00:16:42.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.933 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:16:42.933 00:16:42.933 --- 10.0.0.1 ping statistics --- 00:16:42.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.933 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.933 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1431320 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1431320 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1431320 ']' 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:42.934 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:42.934 [2024-07-14 13:52:20.712170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:42.934 [2024-07-14 13:52:20.712257] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.934 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.934 [2024-07-14 13:52:20.777947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:42.934 [2024-07-14 13:52:20.865481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.934 [2024-07-14 13:52:20.865535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.934 [2024-07-14 13:52:20.865548] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.934 [2024-07-14 13:52:20.865559] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.934 [2024-07-14 13:52:20.865569] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:42.934 [2024-07-14 13:52:20.865660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.934 [2024-07-14 13:52:20.865723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.934 [2024-07-14 13:52:20.865725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.193 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:43.193 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:43.193 13:52:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.193 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.193 13:52:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:43.193 13:52:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.193 13:52:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:43.451 [2024-07-14 13:52:21.247095] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.451 13:52:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.708 13:52:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:43.708 13:52:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.966 13:52:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:43.966 13:52:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:44.224 13:52:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:44.792 13:52:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=87c0872d-69ee-444b-a895-e35c8c70b3f6 00:16:44.792 13:52:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87c0872d-69ee-444b-a895-e35c8c70b3f6 lvol 20 00:16:44.792 13:52:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=40d64288-68be-4e96-bb80-6519324ab139 00:16:44.792 13:52:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:45.050 13:52:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40d64288-68be-4e96-bb80-6519324ab139 00:16:45.308 13:52:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:45.570 [2024-07-14 13:52:23.517248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.570 13:52:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:46.178 13:52:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1431746 00:16:46.178 13:52:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:46.178 13:52:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:46.178 EAL: No free 2048 kB hugepages reported on node 1 
00:16:47.110 13:52:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 40d64288-68be-4e96-bb80-6519324ab139 MY_SNAPSHOT 00:16:47.368 13:52:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7edaf967-3b6a-40c0-82dc-5b6ca8fefac2 00:16:47.368 13:52:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 40d64288-68be-4e96-bb80-6519324ab139 30 00:16:47.626 13:52:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7edaf967-3b6a-40c0-82dc-5b6ca8fefac2 MY_CLONE 00:16:47.884 13:52:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8ee93a14-b51e-4c44-8d08-5d7073b79e96 00:16:47.884 13:52:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8ee93a14-b51e-4c44-8d08-5d7073b79e96 00:16:48.450 13:52:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1431746 00:16:56.568 Initializing NVMe Controllers 00:16:56.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:56.568 Controller IO queue size 128, less than required. 00:16:56.568 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:56.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:56.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:56.568 Initialization complete. Launching workers. 
00:16:56.568 ======================================================== 00:16:56.568 Latency(us) 00:16:56.568 Device Information : IOPS MiB/s Average min max 00:16:56.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10683.50 41.73 11987.59 1494.46 87672.02 00:16:56.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10578.70 41.32 12100.51 2168.85 76808.35 00:16:56.568 ======================================================== 00:16:56.568 Total : 21262.20 83.06 12043.77 1494.46 87672.02 00:16:56.568 00:16:56.568 13:52:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:56.568 13:52:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40d64288-68be-4e96-bb80-6519324ab139 00:16:56.826 13:52:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87c0872d-69ee-444b-a895-e35c8c70b3f6 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.085 13:52:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.085 rmmod nvme_tcp 00:16:57.085 rmmod nvme_fabrics 00:16:57.085 rmmod nvme_keyring 00:16:57.085 
13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1431320 ']' 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1431320 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1431320 ']' 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1431320 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:57.085 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1431320 00:16:57.344 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:57.344 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:57.344 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1431320' 00:16:57.344 killing process with pid 1431320 00:16:57.344 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1431320 00:16:57.344 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1431320 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.605 13:52:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.509 13:52:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:59.509 00:16:59.509 real 0m18.925s 00:16:59.509 user 1m5.134s 00:16:59.509 sys 0m5.541s 00:16:59.509 13:52:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:59.509 13:52:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:59.509 ************************************ 00:16:59.509 END TEST nvmf_lvol 00:16:59.509 ************************************ 00:16:59.509 13:52:37 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:59.509 13:52:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:59.509 13:52:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:59.509 13:52:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:59.509 ************************************ 00:16:59.509 START TEST nvmf_lvs_grow 00:16:59.509 ************************************ 00:16:59.509 13:52:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:59.767 * Looking for test storage... 
00:16:59.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.767 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.768 13:52:37 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.768 13:52:37 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:59.768 13:52:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.669 13:52:39 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.669 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.670 13:52:39 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.670 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.670 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.670 13:52:39 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.670 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.670 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:17:01.670 00:17:01.670 --- 10.0.0.2 ping statistics --- 00:17:01.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.670 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:17:01.670 00:17:01.670 --- 10.0.0.1 ping statistics --- 00:17:01.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.670 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1435011 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1435011 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1435011 ']' 
00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.670 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:01.929 [2024-07-14 13:52:39.680673] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:01.929 [2024-07-14 13:52:39.680761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.929 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.929 [2024-07-14 13:52:39.746416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.929 [2024-07-14 13:52:39.829607] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.929 [2024-07-14 13:52:39.829673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.929 [2024-07-14 13:52:39.829693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.929 [2024-07-14 13:52:39.829704] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.929 [2024-07-14 13:52:39.829714] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
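The trace above blocks in `waitforlisten` until `nvmf_tgt` creates its RPC UNIX socket at `/var/tmp/spdk.sock` before any `rpc.py` call is issued. A minimal sketch of that wait loop in Python — the function name and parameters here are illustrative, not SPDK's actual helper:

```python
import os
import time

def wait_for_socket(path: str, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll until `path` exists (e.g. an RPC UNIX socket) or the timeout expires.

    Returns True if the path appeared in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

The real harness additionally retries the RPC call itself (`max_retries=100` in the trace), since the socket can exist before the server is ready to accept requests.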
00:17:01.929 [2024-07-14 13:52:39.829740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.187 13:52:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:02.445 [2024-07-14 13:52:40.180544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.445 ************************************ 00:17:02.445 START TEST lvs_grow_clean 00:17:02.445 ************************************ 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:02.445 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:02.705 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:02.705 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:02.965 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=baeac60d-a39f-493b-bfc3-c39041912a88 00:17:02.965 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:02.965 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:03.225 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:03.225 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:03.225 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u baeac60d-a39f-493b-bfc3-c39041912a88 lvol 150 00:17:03.484 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b 00:17:03.484 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.484 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:03.741 [2024-07-14 13:52:41.532210] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:03.741 [2024-07-14 13:52:41.532303] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:03.741 true 00:17:03.741 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:03.741 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:04.000 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:04.000 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
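The block counts and cluster counts in the trace above are internally consistent: the AIO file is truncated from 200 MiB to 400 MiB, and `bdev_aio_rescan` reports the block count going from 51200 to 102400 at a 4096-byte block size. A quick arithmetic check (the attribution of the missing 50th cluster to lvstore metadata is an inference from the numbers, not stated in the log):

```python
MiB = 1024 * 1024
BLOCK_SIZE = 4096          # bdev_aio_create ... 4096 in the trace
CLUSTER_SZ = 4 * MiB       # --cluster-sz 4194304 in the trace

# Block counts reported by bdev_aio_rescan before and after truncate.
old_blocks = 200 * MiB // BLOCK_SIZE   # 51200
new_blocks = 400 * MiB // BLOCK_SIZE   # 102400

# 200 MiB / 4 MiB = 50 raw clusters; the log reports 49 data clusters,
# consistent with one cluster being consumed by lvstore metadata.
raw_clusters = 200 * MiB // CLUSTER_SZ  # 50
```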
00:17:04.261 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b 00:17:04.520 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:04.779 [2024-07-14 13:52:42.523295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.779 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1435446 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1435446 /var/tmp/bdevperf.sock 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1435446 ']' 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.038 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:05.038 [2024-07-14 13:52:42.857648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:05.038 [2024-07-14 13:52:42.857737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435446 ] 00:17:05.038 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.038 [2024-07-14 13:52:42.921404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.038 [2024-07-14 13:52:43.013575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.297 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:05.297 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:05.297 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:05.864 Nvme0n1 00:17:05.864 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:06.123 [ 00:17:06.123 { 00:17:06.123 "name": "Nvme0n1", 00:17:06.123 "aliases": [ 00:17:06.123 "9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b" 00:17:06.123 ], 00:17:06.123 
"product_name": "NVMe disk", 00:17:06.123 "block_size": 4096, 00:17:06.123 "num_blocks": 38912, 00:17:06.123 "uuid": "9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b", 00:17:06.123 "assigned_rate_limits": { 00:17:06.123 "rw_ios_per_sec": 0, 00:17:06.123 "rw_mbytes_per_sec": 0, 00:17:06.123 "r_mbytes_per_sec": 0, 00:17:06.123 "w_mbytes_per_sec": 0 00:17:06.123 }, 00:17:06.123 "claimed": false, 00:17:06.123 "zoned": false, 00:17:06.123 "supported_io_types": { 00:17:06.123 "read": true, 00:17:06.123 "write": true, 00:17:06.123 "unmap": true, 00:17:06.123 "write_zeroes": true, 00:17:06.123 "flush": true, 00:17:06.123 "reset": true, 00:17:06.123 "compare": true, 00:17:06.123 "compare_and_write": true, 00:17:06.123 "abort": true, 00:17:06.123 "nvme_admin": true, 00:17:06.123 "nvme_io": true 00:17:06.123 }, 00:17:06.123 "memory_domains": [ 00:17:06.123 { 00:17:06.123 "dma_device_id": "system", 00:17:06.123 "dma_device_type": 1 00:17:06.123 } 00:17:06.123 ], 00:17:06.123 "driver_specific": { 00:17:06.123 "nvme": [ 00:17:06.123 { 00:17:06.123 "trid": { 00:17:06.123 "trtype": "TCP", 00:17:06.123 "adrfam": "IPv4", 00:17:06.123 "traddr": "10.0.0.2", 00:17:06.123 "trsvcid": "4420", 00:17:06.123 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:06.123 }, 00:17:06.123 "ctrlr_data": { 00:17:06.123 "cntlid": 1, 00:17:06.123 "vendor_id": "0x8086", 00:17:06.123 "model_number": "SPDK bdev Controller", 00:17:06.123 "serial_number": "SPDK0", 00:17:06.123 "firmware_revision": "24.05.1", 00:17:06.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:06.123 "oacs": { 00:17:06.123 "security": 0, 00:17:06.123 "format": 0, 00:17:06.123 "firmware": 0, 00:17:06.123 "ns_manage": 0 00:17:06.123 }, 00:17:06.123 "multi_ctrlr": true, 00:17:06.123 "ana_reporting": false 00:17:06.123 }, 00:17:06.123 "vs": { 00:17:06.123 "nvme_version": "1.3" 00:17:06.123 }, 00:17:06.123 "ns_data": { 00:17:06.123 "id": 1, 00:17:06.123 "can_share": true 00:17:06.123 } 00:17:06.123 } 00:17:06.123 ], 00:17:06.123 "mp_policy": 
"active_passive" 00:17:06.123 } 00:17:06.123 } 00:17:06.123 ] 00:17:06.123 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1435578 00:17:06.123 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:06.123 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:06.123 Running I/O for 10 seconds... 00:17:07.062 Latency(us) 00:17:07.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.062 Nvme0n1 : 1.00 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:17:07.062 =================================================================================================================== 00:17:07.062 Total : 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:17:07.062 00:17:08.018 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:08.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.295 Nvme0n1 : 2.00 14487.00 56.59 0.00 0.00 0.00 0.00 0.00 00:17:08.295 =================================================================================================================== 00:17:08.295 Total : 14487.00 56.59 0.00 0.00 0.00 0.00 0.00 00:17:08.295 00:17:08.295 true 00:17:08.295 13:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:08.295 13:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:08.553 13:52:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:08.553 13:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:08.553 13:52:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1435578 00:17:09.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.121 Nvme0n1 : 3.00 14632.33 57.16 0.00 0.00 0.00 0.00 0.00 00:17:09.121 =================================================================================================================== 00:17:09.121 Total : 14632.33 57.16 0.00 0.00 0.00 0.00 0.00 00:17:09.121 00:17:10.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.503 Nvme0n1 : 4.00 14704.75 57.44 0.00 0.00 0.00 0.00 0.00 00:17:10.503 =================================================================================================================== 00:17:10.503 Total : 14704.75 57.44 0.00 0.00 0.00 0.00 0.00 00:17:10.503 00:17:11.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.073 Nvme0n1 : 5.00 14761.00 57.66 0.00 0.00 0.00 0.00 0.00 00:17:11.073 =================================================================================================================== 00:17:11.073 Total : 14761.00 57.66 0.00 0.00 0.00 0.00 0.00 00:17:11.073 00:17:12.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.454 Nvme0n1 : 6.00 14798.50 57.81 0.00 0.00 0.00 0.00 0.00 00:17:12.454 =================================================================================================================== 00:17:12.454 Total : 14798.50 57.81 0.00 0.00 0.00 0.00 0.00 00:17:12.454 00:17:13.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.392 Nvme0n1 : 7.00 14843.43 57.98 0.00 0.00 0.00 0.00 0.00 00:17:13.392 
=================================================================================================================== 00:17:13.392 Total : 14843.43 57.98 0.00 0.00 0.00 0.00 0.00 00:17:13.392 00:17:14.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.330 Nvme0n1 : 8.00 14861.25 58.05 0.00 0.00 0.00 0.00 0.00 00:17:14.330 =================================================================================================================== 00:17:14.330 Total : 14861.25 58.05 0.00 0.00 0.00 0.00 0.00 00:17:14.330 00:17:15.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.267 Nvme0n1 : 9.00 14889.22 58.16 0.00 0.00 0.00 0.00 0.00 00:17:15.267 =================================================================================================================== 00:17:15.267 Total : 14889.22 58.16 0.00 0.00 0.00 0.00 0.00 00:17:15.267 00:17:16.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.202 Nvme0n1 : 10.00 14918.00 58.27 0.00 0.00 0.00 0.00 0.00 00:17:16.202 =================================================================================================================== 00:17:16.202 Total : 14918.00 58.27 0.00 0.00 0.00 0.00 0.00 00:17:16.202 00:17:16.202 00:17:16.202 Latency(us) 00:17:16.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.202 Nvme0n1 : 10.01 14922.11 58.29 0.00 0.00 8572.65 4660.34 17767.54 00:17:16.202 =================================================================================================================== 00:17:16.202 Total : 14922.11 58.29 0.00 0.00 8572.65 4660.34 17767.54 00:17:16.202 0 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1435446 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 
1435446 ']' 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1435446 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1435446 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1435446' 00:17:16.202 killing process with pid 1435446 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1435446 00:17:16.202 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.202 00:17:16.202 Latency(us) 00:17:16.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.202 =================================================================================================================== 00:17:16.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.202 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1435446 00:17:16.464 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.721 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:16.978 13:52:54 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:16.978 13:52:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:17.236 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:17.236 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:17.236 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:17.495 [2024-07-14 13:52:55.291650] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:17.495 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:17.753 request: 00:17:17.753 { 00:17:17.753 "uuid": "baeac60d-a39f-493b-bfc3-c39041912a88", 00:17:17.753 "method": "bdev_lvol_get_lvstores", 00:17:17.753 "req_id": 1 00:17:17.753 } 00:17:17.753 Got JSON-RPC error response 00:17:17.753 response: 00:17:17.753 { 00:17:17.753 "code": -19, 00:17:17.753 "message": "No such device" 00:17:17.753 } 00:17:17.753 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:17.753 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.753 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.753 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.753 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:18.011 aio_bdev 
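The `NOT ... bdev_lvol_get_lvstores` step above is a negative test: after `bdev_aio_delete` removes the backing device, the RPC is expected to fail with `-19` ("No such device"), and the `NOT` helper inverts the exit status so the test passes only when the command fails. A minimal sketch of that inversion pattern (a generic illustration, not SPDK's `NOT` implementation):

```python
import subprocess
import sys

def expect_failure(cmd: list[str]) -> bool:
    """Return True iff the command exits non-zero (the negative-test pattern)."""
    return subprocess.run(cmd, capture_output=True).returncode != 0

# Example: a command that exits 1 satisfies the negative test,
# a command that exits 0 does not.
failing = [sys.executable, "-c", "import sys; sys.exit(1)"]
passing = [sys.executable, "-c", "import sys; sys.exit(0)"]
```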
00:17:18.011 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b 00:17:18.011 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b 00:17:18.011 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:18.011 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:18.011 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:18.012 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:18.012 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:18.270 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b -t 2000 00:17:18.529 [ 00:17:18.529 { 00:17:18.529 "name": "9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b", 00:17:18.529 "aliases": [ 00:17:18.529 "lvs/lvol" 00:17:18.529 ], 00:17:18.529 "product_name": "Logical Volume", 00:17:18.529 "block_size": 4096, 00:17:18.529 "num_blocks": 38912, 00:17:18.529 "uuid": "9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b", 00:17:18.529 "assigned_rate_limits": { 00:17:18.529 "rw_ios_per_sec": 0, 00:17:18.529 "rw_mbytes_per_sec": 0, 00:17:18.529 "r_mbytes_per_sec": 0, 00:17:18.529 "w_mbytes_per_sec": 0 00:17:18.529 }, 00:17:18.529 "claimed": false, 00:17:18.529 "zoned": false, 00:17:18.529 "supported_io_types": { 00:17:18.529 "read": true, 00:17:18.529 "write": true, 00:17:18.529 "unmap": true, 00:17:18.529 "write_zeroes": true, 00:17:18.529 "flush": false, 00:17:18.529 "reset": true, 00:17:18.529 "compare": false, 
00:17:18.529 "compare_and_write": false, 00:17:18.529 "abort": false, 00:17:18.529 "nvme_admin": false, 00:17:18.529 "nvme_io": false 00:17:18.529 }, 00:17:18.529 "driver_specific": { 00:17:18.529 "lvol": { 00:17:18.529 "lvol_store_uuid": "baeac60d-a39f-493b-bfc3-c39041912a88", 00:17:18.529 "base_bdev": "aio_bdev", 00:17:18.529 "thin_provision": false, 00:17:18.529 "num_allocated_clusters": 38, 00:17:18.529 "snapshot": false, 00:17:18.529 "clone": false, 00:17:18.529 "esnap_clone": false 00:17:18.529 } 00:17:18.529 } 00:17:18.529 } 00:17:18.529 ] 00:17:18.529 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:18.529 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:18.529 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:18.788 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:18.788 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u baeac60d-a39f-493b-bfc3-c39041912a88 00:17:18.788 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:19.047 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:19.047 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9fc2c998-3d39-4ecd-bfc6-dbfccbbadc5b 00:17:19.307 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
baeac60d-a39f-493b-bfc3-c39041912a88 00:17:19.566 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.824 00:17:19.824 real 0m17.475s 00:17:19.824 user 0m16.856s 00:17:19.824 sys 0m1.939s 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.824 ************************************ 00:17:19.824 END TEST lvs_grow_clean 00:17:19.824 ************************************ 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:19.824 ************************************ 00:17:19.824 START TEST lvs_grow_dirty 00:17:19.824 ************************************ 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.824 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:20.083 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:20.083 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:20.342 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=56e681de-bd6e-4a20-850d-6364459ba127 00:17:20.342 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:20.342 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:20.602 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:20.602 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:20.602 
13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56e681de-bd6e-4a20-850d-6364459ba127 lvol 150 00:17:20.861 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e97426b-a91f-44ac-af01-96533edfbe93 00:17:20.861 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:20.861 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:21.119 [2024-07-14 13:52:58.972088] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:21.119 [2024-07-14 13:52:58.972215] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:21.119 true 00:17:21.119 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:21.119 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:21.378 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:21.378 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:21.636 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 7e97426b-a91f-44ac-af01-96533edfbe93 00:17:21.895 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:22.153 [2024-07-14 13:53:00.003301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.153 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:22.410 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1437487 00:17:22.410 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1437487 /var/tmp/bdevperf.sock 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1437487 ']' 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:22.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.411 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:22.411 [2024-07-14 13:53:00.301452] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:22.411 [2024-07-14 13:53:00.301520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437487 ] 00:17:22.411 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.411 [2024-07-14 13:53:00.364814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.669 [2024-07-14 13:53:00.457071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.669 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:22.669 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:22.669 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:22.928 Nvme0n1 00:17:22.928 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:23.185 [ 00:17:23.185 { 00:17:23.185 "name": "Nvme0n1", 00:17:23.185 "aliases": [ 00:17:23.185 "7e97426b-a91f-44ac-af01-96533edfbe93" 00:17:23.185 ], 00:17:23.185 "product_name": "NVMe disk", 00:17:23.185 "block_size": 4096, 00:17:23.185 "num_blocks": 
38912, 00:17:23.185 "uuid": "7e97426b-a91f-44ac-af01-96533edfbe93", 00:17:23.185 "assigned_rate_limits": { 00:17:23.185 "rw_ios_per_sec": 0, 00:17:23.185 "rw_mbytes_per_sec": 0, 00:17:23.185 "r_mbytes_per_sec": 0, 00:17:23.185 "w_mbytes_per_sec": 0 00:17:23.185 }, 00:17:23.185 "claimed": false, 00:17:23.185 "zoned": false, 00:17:23.185 "supported_io_types": { 00:17:23.185 "read": true, 00:17:23.185 "write": true, 00:17:23.185 "unmap": true, 00:17:23.185 "write_zeroes": true, 00:17:23.185 "flush": true, 00:17:23.185 "reset": true, 00:17:23.185 "compare": true, 00:17:23.185 "compare_and_write": true, 00:17:23.185 "abort": true, 00:17:23.185 "nvme_admin": true, 00:17:23.185 "nvme_io": true 00:17:23.185 }, 00:17:23.185 "memory_domains": [ 00:17:23.185 { 00:17:23.185 "dma_device_id": "system", 00:17:23.185 "dma_device_type": 1 00:17:23.185 } 00:17:23.185 ], 00:17:23.185 "driver_specific": { 00:17:23.185 "nvme": [ 00:17:23.185 { 00:17:23.185 "trid": { 00:17:23.185 "trtype": "TCP", 00:17:23.185 "adrfam": "IPv4", 00:17:23.185 "traddr": "10.0.0.2", 00:17:23.185 "trsvcid": "4420", 00:17:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:23.185 }, 00:17:23.185 "ctrlr_data": { 00:17:23.185 "cntlid": 1, 00:17:23.185 "vendor_id": "0x8086", 00:17:23.185 "model_number": "SPDK bdev Controller", 00:17:23.185 "serial_number": "SPDK0", 00:17:23.185 "firmware_revision": "24.05.1", 00:17:23.185 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:23.185 "oacs": { 00:17:23.185 "security": 0, 00:17:23.185 "format": 0, 00:17:23.185 "firmware": 0, 00:17:23.185 "ns_manage": 0 00:17:23.185 }, 00:17:23.185 "multi_ctrlr": true, 00:17:23.185 "ana_reporting": false 00:17:23.185 }, 00:17:23.185 "vs": { 00:17:23.185 "nvme_version": "1.3" 00:17:23.185 }, 00:17:23.185 "ns_data": { 00:17:23.185 "id": 1, 00:17:23.185 "can_share": true 00:17:23.185 } 00:17:23.185 } 00:17:23.185 ], 00:17:23.185 "mp_policy": "active_passive" 00:17:23.185 } 00:17:23.185 } 00:17:23.185 ] 00:17:23.185 13:53:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1437624 00:17:23.185 13:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:23.185 13:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.443 Running I/O for 10 seconds... 00:17:24.379 Latency(us) 00:17:24.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.379 Nvme0n1 : 1.00 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:17:24.379 =================================================================================================================== 00:17:24.379 Total : 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:17:24.379 00:17:25.369 13:53:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:25.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.369 Nvme0n1 : 2.00 14351.50 56.06 0.00 0.00 0.00 0.00 0.00 00:17:25.369 =================================================================================================================== 00:17:25.369 Total : 14351.50 56.06 0.00 0.00 0.00 0.00 0.00 00:17:25.369 00:17:25.628 true 00:17:25.628 13:53:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:25.628 13:53:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:25.886 13:53:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:17:25.886 13:53:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:25.886 13:53:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1437624 00:17:26.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.456 Nvme0n1 : 3.00 14436.00 56.39 0.00 0.00 0.00 0.00 0.00 00:17:26.456 =================================================================================================================== 00:17:26.456 Total : 14436.00 56.39 0.00 0.00 0.00 0.00 0.00 00:17:26.456 00:17:27.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.395 Nvme0n1 : 4.00 14541.75 56.80 0.00 0.00 0.00 0.00 0.00 00:17:27.395 =================================================================================================================== 00:17:27.395 Total : 14541.75 56.80 0.00 0.00 0.00 0.00 0.00 00:17:27.395 00:17:28.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.331 Nvme0n1 : 5.00 14659.40 57.26 0.00 0.00 0.00 0.00 0.00 00:17:28.331 =================================================================================================================== 00:17:28.331 Total : 14659.40 57.26 0.00 0.00 0.00 0.00 0.00 00:17:28.331 00:17:29.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.265 Nvme0n1 : 6.00 14692.67 57.39 0.00 0.00 0.00 0.00 0.00 00:17:29.265 =================================================================================================================== 00:17:29.265 Total : 14692.67 57.39 0.00 0.00 0.00 0.00 0.00 00:17:29.265 00:17:30.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.642 Nvme0n1 : 7.00 14716.43 57.49 0.00 0.00 0.00 0.00 0.00 00:17:30.642 =================================================================================================================== 00:17:30.642 Total : 14716.43 57.49 
0.00 0.00 0.00 0.00 0.00 00:17:30.642 00:17:31.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.577 Nvme0n1 : 8.00 14734.25 57.56 0.00 0.00 0.00 0.00 0.00 00:17:31.577 =================================================================================================================== 00:17:31.577 Total : 14734.25 57.56 0.00 0.00 0.00 0.00 0.00 00:17:31.577 00:17:32.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.515 Nvme0n1 : 9.00 14748.11 57.61 0.00 0.00 0.00 0.00 0.00 00:17:32.515 =================================================================================================================== 00:17:32.515 Total : 14748.11 57.61 0.00 0.00 0.00 0.00 0.00 00:17:32.515 00:17:33.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.451 Nvme0n1 : 10.00 14784.60 57.75 0.00 0.00 0.00 0.00 0.00 00:17:33.451 =================================================================================================================== 00:17:33.451 Total : 14784.60 57.75 0.00 0.00 0.00 0.00 0.00 00:17:33.451 00:17:33.451 00:17:33.451 Latency(us) 00:17:33.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.451 Nvme0n1 : 10.01 14788.57 57.77 0.00 0.00 8650.73 2233.08 19709.35 00:17:33.451 =================================================================================================================== 00:17:33.451 Total : 14788.57 57.77 0.00 0.00 8650.73 2233.08 19709.35 00:17:33.451 0 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1437487 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1437487 ']' 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1437487 00:17:33.451 13:53:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1437487 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1437487' 00:17:33.451 killing process with pid 1437487 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1437487 00:17:33.451 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.451 00:17:33.451 Latency(us) 00:17:33.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.451 =================================================================================================================== 00:17:33.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.451 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1437487 00:17:33.709 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:33.967 13:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:34.225 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
56e681de-bd6e-4a20-850d-6364459ba127 00:17:34.225 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1435011 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1435011 00:17:34.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1435011 Killed "${NVMF_APP[@]}" "$@" 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1438948 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1438948 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1438948 ']' 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.483 13:53:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:34.483 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:34.483 [2024-07-14 13:53:12.403172] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:34.483 [2024-07-14 13:53:12.403269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.483 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.741 [2024-07-14 13:53:12.475209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.741 [2024-07-14 13:53:12.563077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.741 [2024-07-14 13:53:12.563148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.741 [2024-07-14 13:53:12.563163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.741 [2024-07-14 13:53:12.563174] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.741 [2024-07-14 13:53:12.563184] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.741 [2024-07-14 13:53:12.563225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.741 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:34.741 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:34.741 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.741 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.741 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:34.742 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.742 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:35.001 [2024-07-14 13:53:12.970866] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:35.001 [2024-07-14 13:53:12.971010] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:35.001 [2024-07-14 13:53:12.971066] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7e97426b-a91f-44ac-af01-96533edfbe93 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=7e97426b-a91f-44ac-af01-96533edfbe93 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:35.259 13:53:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:35.259 13:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:35.516 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e97426b-a91f-44ac-af01-96533edfbe93 -t 2000 00:17:35.516 [ 00:17:35.516 { 00:17:35.516 "name": "7e97426b-a91f-44ac-af01-96533edfbe93", 00:17:35.516 "aliases": [ 00:17:35.516 "lvs/lvol" 00:17:35.516 ], 00:17:35.516 "product_name": "Logical Volume", 00:17:35.516 "block_size": 4096, 00:17:35.516 "num_blocks": 38912, 00:17:35.516 "uuid": "7e97426b-a91f-44ac-af01-96533edfbe93", 00:17:35.516 "assigned_rate_limits": { 00:17:35.516 "rw_ios_per_sec": 0, 00:17:35.516 "rw_mbytes_per_sec": 0, 00:17:35.516 "r_mbytes_per_sec": 0, 00:17:35.516 "w_mbytes_per_sec": 0 00:17:35.516 }, 00:17:35.516 "claimed": false, 00:17:35.516 "zoned": false, 00:17:35.516 "supported_io_types": { 00:17:35.516 "read": true, 00:17:35.516 "write": true, 00:17:35.516 "unmap": true, 00:17:35.516 "write_zeroes": true, 00:17:35.516 "flush": false, 00:17:35.516 "reset": true, 00:17:35.516 "compare": false, 00:17:35.516 "compare_and_write": false, 00:17:35.516 "abort": false, 00:17:35.516 "nvme_admin": false, 00:17:35.516 "nvme_io": false 00:17:35.516 }, 00:17:35.516 "driver_specific": { 00:17:35.516 "lvol": { 00:17:35.516 "lvol_store_uuid": "56e681de-bd6e-4a20-850d-6364459ba127", 00:17:35.516 "base_bdev": "aio_bdev", 00:17:35.516 "thin_provision": false, 00:17:35.516 "num_allocated_clusters": 38, 00:17:35.516 "snapshot": false, 00:17:35.516 
"clone": false, 00:17:35.516 "esnap_clone": false 00:17:35.516 } 00:17:35.516 } 00:17:35.516 } 00:17:35.516 ] 00:17:35.516 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:35.773 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:35.773 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:35.773 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:35.774 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:35.774 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:36.032 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:36.032 13:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:36.292 [2024-07-14 13:53:14.223815] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:36.292 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:36.551 request: 00:17:36.551 { 00:17:36.551 "uuid": "56e681de-bd6e-4a20-850d-6364459ba127", 00:17:36.551 "method": "bdev_lvol_get_lvstores", 00:17:36.551 "req_id": 1 00:17:36.551 } 00:17:36.551 Got JSON-RPC error response 00:17:36.551 response: 00:17:36.551 { 00:17:36.551 "code": -19, 00:17:36.551 "message": "No such device" 00:17:36.551 } 00:17:36.551 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:36.551 13:53:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:36.551 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:36.551 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:36.551 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:36.808 aio_bdev 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7e97426b-a91f-44ac-af01-96533edfbe93 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=7e97426b-a91f-44ac-af01-96533edfbe93 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:36.808 13:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:37.066 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e97426b-a91f-44ac-af01-96533edfbe93 -t 2000 00:17:37.324 [ 00:17:37.324 { 00:17:37.324 "name": "7e97426b-a91f-44ac-af01-96533edfbe93", 00:17:37.324 "aliases": [ 00:17:37.324 "lvs/lvol" 00:17:37.324 ], 00:17:37.324 "product_name": "Logical Volume", 00:17:37.324 "block_size": 4096, 
00:17:37.324 "num_blocks": 38912, 00:17:37.324 "uuid": "7e97426b-a91f-44ac-af01-96533edfbe93", 00:17:37.324 "assigned_rate_limits": { 00:17:37.324 "rw_ios_per_sec": 0, 00:17:37.324 "rw_mbytes_per_sec": 0, 00:17:37.324 "r_mbytes_per_sec": 0, 00:17:37.324 "w_mbytes_per_sec": 0 00:17:37.324 }, 00:17:37.324 "claimed": false, 00:17:37.324 "zoned": false, 00:17:37.324 "supported_io_types": { 00:17:37.324 "read": true, 00:17:37.324 "write": true, 00:17:37.324 "unmap": true, 00:17:37.324 "write_zeroes": true, 00:17:37.324 "flush": false, 00:17:37.324 "reset": true, 00:17:37.324 "compare": false, 00:17:37.324 "compare_and_write": false, 00:17:37.324 "abort": false, 00:17:37.324 "nvme_admin": false, 00:17:37.324 "nvme_io": false 00:17:37.324 }, 00:17:37.324 "driver_specific": { 00:17:37.324 "lvol": { 00:17:37.324 "lvol_store_uuid": "56e681de-bd6e-4a20-850d-6364459ba127", 00:17:37.324 "base_bdev": "aio_bdev", 00:17:37.324 "thin_provision": false, 00:17:37.324 "num_allocated_clusters": 38, 00:17:37.324 "snapshot": false, 00:17:37.324 "clone": false, 00:17:37.324 "esnap_clone": false 00:17:37.324 } 00:17:37.324 } 00:17:37.324 } 00:17:37.324 ] 00:17:37.324 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:37.324 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:37.324 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:37.583 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:37.583 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:37.583 13:53:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:37.842 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:37.842 13:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e97426b-a91f-44ac-af01-96533edfbe93 00:17:38.101 13:53:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56e681de-bd6e-4a20-850d-6364459ba127 00:17:38.359 13:53:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:38.617 13:53:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:38.617 00:17:38.617 real 0m18.839s 00:17:38.617 user 0m47.864s 00:17:38.617 sys 0m4.601s 00:17:38.617 13:53:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.617 13:53:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:38.617 ************************************ 00:17:38.617 END TEST lvs_grow_dirty 00:17:38.617 ************************************ 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:38.876 
13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:38.876 nvmf_trace.0 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.876 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.876 rmmod nvme_tcp 00:17:38.877 rmmod nvme_fabrics 00:17:38.877 rmmod nvme_keyring 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1438948 ']' 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1438948 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1438948 ']' 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1438948 00:17:38.877 13:53:16 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1438948 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1438948' 00:17:38.877 killing process with pid 1438948 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1438948 00:17:38.877 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1438948 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.137 13:53:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.045 13:53:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.045 00:17:41.046 real 0m41.530s 00:17:41.046 user 1m10.355s 00:17:41.046 sys 0m8.380s 00:17:41.046 13:53:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:41.046 13:53:18 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:41.046 ************************************ 00:17:41.046 END TEST nvmf_lvs_grow 00:17:41.046 ************************************ 00:17:41.046 13:53:19 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:41.046 13:53:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:41.046 13:53:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:41.046 13:53:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.304 ************************************ 00:17:41.304 START TEST nvmf_bdev_io_wait 00:17:41.304 ************************************ 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:41.304 * Looking for test storage... 
00:17:41.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.304 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.305 13:53:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.281 13:53:21 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:43.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:43.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:43.281 13:53:21 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.281 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:43.282 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:43.282 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:43.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:17:43.282 00:17:43.282 --- 10.0.0.2 ping statistics --- 00:17:43.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.282 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:17:43.282 00:17:43.282 --- 10.0.0.1 ping statistics --- 00:17:43.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.282 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1441353 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1441353 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1441353 ']' 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.282 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.282 [2024-07-14 13:53:21.223408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:43.282 [2024-07-14 13:53:21.223501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.282 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.542 [2024-07-14 13:53:21.289638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.542 [2024-07-14 13:53:21.380582] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:43.542 [2024-07-14 13:53:21.380631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.542 [2024-07-14 13:53:21.380657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.542 [2024-07-14 13:53:21.380668] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.542 [2024-07-14 13:53:21.380678] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.542 [2024-07-14 13:53:21.380757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.542 [2024-07-14 13:53:21.380822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.542 [2024-07-14 13:53:21.380956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.542 [2024-07-14 13:53:21.380959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.542 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.801 [2024-07-14 13:53:21.549784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.801 Malloc0 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.801 [2024-07-14 13:53:21.610588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1441493 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1441494 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1441497 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:43.801 13:53:21 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.801 { 00:17:43.801 "params": { 00:17:43.801 "name": "Nvme$subsystem", 00:17:43.801 "trtype": "$TEST_TRANSPORT", 00:17:43.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.801 "adrfam": "ipv4", 00:17:43.801 "trsvcid": "$NVMF_PORT", 00:17:43.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.801 "hdgst": ${hdgst:-false}, 00:17:43.801 "ddgst": ${ddgst:-false} 00:17:43.801 }, 00:17:43.801 "method": "bdev_nvme_attach_controller" 00:17:43.801 } 00:17:43.801 EOF 00:17:43.801 )") 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.801 { 00:17:43.801 "params": { 00:17:43.801 "name": "Nvme$subsystem", 00:17:43.801 "trtype": "$TEST_TRANSPORT", 00:17:43.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.801 "adrfam": "ipv4", 00:17:43.801 "trsvcid": "$NVMF_PORT", 00:17:43.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.801 "hdgst": ${hdgst:-false}, 00:17:43.801 "ddgst": ${ddgst:-false} 00:17:43.801 }, 00:17:43.801 "method": "bdev_nvme_attach_controller" 00:17:43.801 } 00:17:43.801 EOF 00:17:43.801 )") 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1441499 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:43.801 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.802 { 00:17:43.802 "params": { 00:17:43.802 "name": "Nvme$subsystem", 00:17:43.802 "trtype": "$TEST_TRANSPORT", 00:17:43.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.802 "adrfam": "ipv4", 00:17:43.802 "trsvcid": "$NVMF_PORT", 00:17:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.802 "hdgst": ${hdgst:-false}, 00:17:43.802 "ddgst": ${ddgst:-false} 00:17:43.802 }, 00:17:43.802 "method": "bdev_nvme_attach_controller" 00:17:43.802 } 00:17:43.802 EOF 00:17:43.802 )") 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:43.802 { 00:17:43.802 "params": { 00:17:43.802 "name": "Nvme$subsystem", 00:17:43.802 "trtype": "$TEST_TRANSPORT", 00:17:43.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.802 "adrfam": "ipv4", 00:17:43.802 "trsvcid": "$NVMF_PORT", 00:17:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.802 "hdgst": ${hdgst:-false}, 00:17:43.802 "ddgst": ${ddgst:-false} 00:17:43.802 }, 00:17:43.802 "method": "bdev_nvme_attach_controller" 00:17:43.802 } 00:17:43.802 EOF 00:17:43.802 )") 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1441493 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.802 "params": { 00:17:43.802 "name": "Nvme1", 00:17:43.802 "trtype": "tcp", 00:17:43.802 "traddr": "10.0.0.2", 00:17:43.802 "adrfam": "ipv4", 00:17:43.802 "trsvcid": "4420", 00:17:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.802 "hdgst": false, 00:17:43.802 "ddgst": false 00:17:43.802 }, 00:17:43.802 "method": "bdev_nvme_attach_controller" 00:17:43.802 }' 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.802 "params": { 00:17:43.802 "name": "Nvme1", 00:17:43.802 "trtype": "tcp", 00:17:43.802 "traddr": "10.0.0.2", 00:17:43.802 "adrfam": "ipv4", 00:17:43.802 "trsvcid": "4420", 00:17:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.802 "hdgst": false, 00:17:43.802 "ddgst": false 00:17:43.802 }, 00:17:43.802 "method": "bdev_nvme_attach_controller" 00:17:43.802 }' 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.802 "params": { 00:17:43.802 "name": "Nvme1", 00:17:43.802 "trtype": "tcp", 00:17:43.802 "traddr": "10.0.0.2", 00:17:43.802 "adrfam": "ipv4", 00:17:43.802 "trsvcid": "4420", 00:17:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.802 "hdgst": false, 00:17:43.802 "ddgst": false 00:17:43.802 }, 00:17:43.802 "method": "bdev_nvme_attach_controller" 00:17:43.802 }' 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:43.802 13:53:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:43.802 "params": { 00:17:43.802 "name": "Nvme1", 00:17:43.802 "trtype": "tcp", 00:17:43.802 "traddr": "10.0.0.2", 00:17:43.802 "adrfam": "ipv4", 00:17:43.802 "trsvcid": "4420", 00:17:43.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.802 "hdgst": false, 00:17:43.802 "ddgst": false 00:17:43.802 }, 00:17:43.802 "method": "bdev_nvme_attach_controller" 00:17:43.802 }' 00:17:43.802 [2024-07-14 13:53:21.658540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:43.802 [2024-07-14 13:53:21.658540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:43.802 [2024-07-14 13:53:21.658540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:43.802 [2024-07-14 13:53:21.658545] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:43.802 [2024-07-14 13:53:21.658635] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:43.802 [2024-07-14 13:53:21.658647] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:43.802 [2024-07-14 13:53:21.658647] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:43.802 [2024-07-14 13:53:21.658648] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:43.802 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.062 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.062 [2024-07-14 13:53:21.835511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.062 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.062 [2024-07-14 13:53:21.910956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:44.062 [2024-07-14 13:53:21.936232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.062 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.062 
[2024-07-14 13:53:22.012209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:44.062 [2024-07-14 13:53:22.040269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.322 [2024-07-14 13:53:22.114510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.322 [2024-07-14 13:53:22.120722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:44.322 [2024-07-14 13:53:22.184943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:44.607 Running I/O for 1 seconds... 00:17:44.607 Running I/O for 1 seconds... 00:17:44.607 Running I/O for 1 seconds... 00:17:44.607 Running I/O for 1 seconds... 00:17:45.548 00:17:45.548 Latency(us) 00:17:45.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.548 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:45.548 Nvme1n1 : 1.00 165417.47 646.16 0.00 0.00 770.88 310.99 995.18 00:17:45.548 =================================================================================================================== 00:17:45.548 Total : 165417.47 646.16 0.00 0.00 770.88 310.99 995.18 00:17:45.548 00:17:45.548 Latency(us) 00:17:45.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.548 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:45.548 Nvme1n1 : 1.01 9258.97 36.17 0.00 0.00 13762.80 8058.50 20097.71 00:17:45.548 =================================================================================================================== 00:17:45.548 Total : 9258.97 36.17 0.00 0.00 13762.80 8058.50 20097.71 00:17:45.548 00:17:45.548 Latency(us) 00:17:45.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.548 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:45.548 Nvme1n1 : 1.01 8727.22 34.09 0.00 0.00 14590.20 9320.68 26991.12 00:17:45.548 
=================================================================================================================== 00:17:45.548 Total : 8727.22 34.09 0.00 0.00 14590.20 9320.68 26991.12 00:17:45.548 00:17:45.548 Latency(us) 00:17:45.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.548 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:45.548 Nvme1n1 : 1.01 9618.17 37.57 0.00 0.00 13258.40 6262.33 24758.04 00:17:45.548 =================================================================================================================== 00:17:45.548 Total : 9618.17 37.57 0.00 0.00 13258.40 6262.33 24758.04 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1441494 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1441497 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1441499 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:45.809 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.066 rmmod nvme_tcp 00:17:46.066 rmmod nvme_fabrics 00:17:46.066 rmmod nvme_keyring 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1441353 ']' 00:17:46.066 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1441353 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1441353 ']' 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1441353 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1441353 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1441353' 00:17:46.067 killing process with pid 1441353 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1441353 00:17:46.067 13:53:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1441353 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.326 13:53:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.233 13:53:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:48.233 00:17:48.233 real 0m7.087s 00:17:48.233 user 0m16.373s 00:17:48.233 sys 0m3.517s 00:17:48.233 13:53:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:48.233 13:53:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:48.233 ************************************ 00:17:48.233 END TEST nvmf_bdev_io_wait 00:17:48.233 ************************************ 00:17:48.233 13:53:26 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:48.233 13:53:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:48.233 13:53:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:48.233 13:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.233 ************************************ 00:17:48.233 START TEST nvmf_queue_depth 00:17:48.233 ************************************ 00:17:48.233 13:53:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:48.492 * Looking for test storage... 
00:17:48.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:48.492 13:53:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.493 13:53:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:50.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:50.398 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:50.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:50.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.398 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:17:50.399 00:17:50.399 --- 10.0.0.2 ping statistics --- 00:17:50.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.399 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:17:50.399 00:17:50.399 --- 10.0.0.1 ping statistics --- 00:17:50.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.399 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.399 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1443714 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1443714 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1443714 ']' 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.659 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.659 [2024-07-14 13:53:28.437746] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:50.659 [2024-07-14 13:53:28.437840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.659 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.659 [2024-07-14 13:53:28.507480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.659 [2024-07-14 13:53:28.596637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:50.659 [2024-07-14 13:53:28.596703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.659 [2024-07-14 13:53:28.596729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.659 [2024-07-14 13:53:28.596741] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.659 [2024-07-14 13:53:28.596753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.659 [2024-07-14 13:53:28.596800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 [2024-07-14 13:53:28.738050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.916 13:53:28 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 Malloc0 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 [2024-07-14 13:53:28.799651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1443739 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1443739 /var/tmp/bdevperf.sock 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1443739 ']' 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.916 13:53:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 [2024-07-14 13:53:28.845074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:50.916 [2024-07-14 13:53:28.845137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443739 ] 00:17:50.916 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.172 [2024-07-14 13:53:28.909222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.172 [2024-07-14 13:53:28.999995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.172 13:53:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:51.172 13:53:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:51.172 13:53:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:51.172 13:53:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.172 13:53:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.429 NVMe0n1 00:17:51.429 13:53:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.429 13:53:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:51.429 Running I/O for 10 seconds... 
00:18:03.651 00:18:03.651 Latency(us) 00:18:03.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:03.651 Verification LBA range: start 0x0 length 0x4000 00:18:03.651 NVMe0n1 : 10.07 8652.03 33.80 0.00 0.00 117816.41 10728.49 71458.51 00:18:03.651 =================================================================================================================== 00:18:03.651 Total : 8652.03 33.80 0.00 0.00 117816.41 10728.49 71458.51 00:18:03.651 0 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1443739 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1443739 ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1443739 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1443739 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1443739' 00:18:03.651 killing process with pid 1443739 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1443739 00:18:03.651 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.651 00:18:03.651 Latency(us) 00:18:03.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.651 
=================================================================================================================== 00:18:03.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1443739 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.651 rmmod nvme_tcp 00:18:03.651 rmmod nvme_fabrics 00:18:03.651 rmmod nvme_keyring 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1443714 ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1443714 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1443714 ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1443714 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.651 13:53:39 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1443714 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1443714' 00:18:03.651 killing process with pid 1443714 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1443714 00:18:03.651 13:53:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1443714 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.651 13:53:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.219 13:53:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:04.219 00:18:04.219 real 0m15.910s 00:18:04.219 user 0m22.431s 00:18:04.219 sys 0m2.959s 00:18:04.219 13:53:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.219 13:53:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:04.219 ************************************ 00:18:04.219 END TEST nvmf_queue_depth 
00:18:04.219 ************************************ 00:18:04.219 13:53:42 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:04.219 13:53:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:04.219 13:53:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:04.219 13:53:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:04.219 ************************************ 00:18:04.219 START TEST nvmf_target_multipath 00:18:04.219 ************************************ 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:04.219 * Looking for test storage... 00:18:04.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.219 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:04.478 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:18:04.479 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:18:04.479 13:53:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable
00:18:04.479 13:53:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=()
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:18:06.382 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:18:06.382 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:18:06.382 Found net devices under 0000:0a:00.0: cvl_0_0
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:18:06.382 Found net devices under 0000:0a:00.1: cvl_0_1
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:06.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:06.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms
00:18:06.382 
00:18:06.382 --- 10.0.0.2 ping statistics ---
00:18:06.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:06.382 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:06.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:06.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms
00:18:06.382 
00:18:06.382 --- 10.0.0.1 ping statistics ---
00:18:06.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:06.382 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:18:06.382 only one NIC for nvmf test
00:18:06.382 13:53:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:06.383 rmmod nvme_tcp
00:18:06.383 rmmod nvme_fabrics
00:18:06.383 rmmod nvme_keyring
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:06.383 13:53:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:08.915 
00:18:08.915 real 0m4.254s
00:18:08.915 user 0m0.805s
00:18:08.915 sys 0m1.426s
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable
00:18:08.915 13:53:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:18:08.915 ************************************
00:18:08.915 END TEST nvmf_target_multipath
00:18:08.915 ************************************
00:18:08.915 13:53:46 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:18:08.915 13:53:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:18:08.915 13:53:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:18:08.915 13:53:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:18:08.915 ************************************
00:18:08.915 START TEST nvmf_zcopy
00:18:08.915 ************************************
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:18:08.915 * Looking for test storage...
00:18:08.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:08.915 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable
00:18:08.916 13:53:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=()
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:10.847 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:18:10.848 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:18:10.848 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:18:10.848 Found net devices under 0000:0a:00.0: cvl_0_0
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:18:10.848 Found net devices under 0000:0a:00.1: cvl_0_1
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:10.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:10.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms
00:18:10.848 
00:18:10.848 --- 10.0.0.2 ping statistics ---
00:18:10.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:10.848 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:10.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:10.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:18:10.848 
00:18:10.848 --- 10.0.0.1 ping statistics ---
00:18:10.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:10.848 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1448790
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1448790
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1448790 ']'
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:10.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:10.848 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:10.848 [2024-07-14 13:53:48.611386] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:18:10.848 [2024-07-14 13:53:48.611467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:10.848 EAL: No free 2048 kB hugepages reported on node 1
00:18:10.848 [2024-07-14 13:53:48.680401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.848 [2024-07-14 13:53:48.776289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:10.848 [2024-07-14 13:53:48.776338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:10.848 [2024-07-14 13:53:48.776368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:10.848 [2024-07-14 13:53:48.776379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:10.848 [2024-07-14 13:53:48.776388] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:10.848 [2024-07-14 13:53:48.776428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 [2024-07-14 13:53:48.910949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 [2024-07-14 13:53:48.927135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 malloc0 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:11.107 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:11.107 { 00:18:11.107 "params": { 00:18:11.107 "name": "Nvme$subsystem", 00:18:11.107 "trtype": "$TEST_TRANSPORT", 00:18:11.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:11.107 "adrfam": "ipv4", 00:18:11.107 "trsvcid": "$NVMF_PORT", 00:18:11.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:11.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:11.107 "hdgst": ${hdgst:-false}, 00:18:11.107 "ddgst": ${ddgst:-false} 00:18:11.107 }, 00:18:11.107 "method": "bdev_nvme_attach_controller" 00:18:11.108 } 00:18:11.108 EOF 00:18:11.108 )") 00:18:11.108 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:11.108 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:11.108 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:11.108 13:53:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:11.108 "params": { 00:18:11.108 "name": "Nvme1", 00:18:11.108 "trtype": "tcp", 00:18:11.108 "traddr": "10.0.0.2", 00:18:11.108 "adrfam": "ipv4", 00:18:11.108 "trsvcid": "4420", 00:18:11.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.108 "hdgst": false, 00:18:11.108 "ddgst": false 00:18:11.108 }, 00:18:11.108 "method": "bdev_nvme_attach_controller" 00:18:11.108 }' 00:18:11.108 [2024-07-14 13:53:48.999748] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:11.108 [2024-07-14 13:53:48.999823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448930 ] 00:18:11.108 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.108 [2024-07-14 13:53:49.062085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.367 [2024-07-14 13:53:49.152759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.627 Running I/O for 10 seconds... 00:18:21.613 00:18:21.613 Latency(us) 00:18:21.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.613 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:21.613 Verification LBA range: start 0x0 length 0x1000 00:18:21.613 Nvme1n1 : 10.01 5698.25 44.52 0.00 0.00 22400.13 415.67 33787.45 00:18:21.613 =================================================================================================================== 00:18:21.613 Total : 5698.25 44.52 0.00 0.00 22400.13 415.67 33787.45 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1450120 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:21.873 { 00:18:21.873 "params": { 00:18:21.873 "name": "Nvme$subsystem", 00:18:21.873 "trtype": "$TEST_TRANSPORT", 00:18:21.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:21.873 "adrfam": "ipv4", 00:18:21.873 "trsvcid": "$NVMF_PORT", 00:18:21.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:21.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:21.873 "hdgst": ${hdgst:-false}, 00:18:21.873 "ddgst": ${ddgst:-false} 00:18:21.873 }, 00:18:21.873 "method": "bdev_nvme_attach_controller" 00:18:21.873 } 00:18:21.873 EOF 00:18:21.873 )") 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:21.873 [2024-07-14 13:53:59.687362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.687411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:21.873 13:53:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:21.873 "params": { 00:18:21.873 "name": "Nvme1", 00:18:21.873 "trtype": "tcp", 00:18:21.873 "traddr": "10.0.0.2", 00:18:21.873 "adrfam": "ipv4", 00:18:21.873 "trsvcid": "4420", 00:18:21.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.873 "hdgst": false, 00:18:21.873 "ddgst": false 00:18:21.873 }, 00:18:21.873 "method": "bdev_nvme_attach_controller" 00:18:21.873 }' 00:18:21.873 [2024-07-14 13:53:59.695308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.695335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.703317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 
13:53:59.703343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.711340] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.711365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.719360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.719384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.723888] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:21.873 [2024-07-14 13:53:59.723958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450120 ] 00:18:21.873 [2024-07-14 13:53:59.727381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.727415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.735403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.735428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.743424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.743448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.751448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.751472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 EAL: No free 2048 kB hugepages reported on 
node 1 00:18:21.873 [2024-07-14 13:53:59.759472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.759497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.767494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.767518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.775515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.775539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.783537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.783561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.790539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.873 [2024-07-14 13:53:59.791559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.791584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.799636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.799682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.807614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.807641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.815626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.815651] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.823647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.823672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.831668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.831693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.839698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.839726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.873 [2024-07-14 13:53:59.847756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:21.873 [2024-07-14 13:53:59.847798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.855736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.855762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.863757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.863782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.871785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.871810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.879801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.879827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:22.132 [2024-07-14 13:53:59.884037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.132 [2024-07-14 13:53:59.887826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.887851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.895847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.895871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.903926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.903958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.911946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.911981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.919971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.920005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.927986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.928025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.936007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.936044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.944020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.944056] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.952008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.952033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.960047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.960086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.968070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.968108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.976075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.976107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.984075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.984096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:53:59.992102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:53:59.992124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:54:00.000149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:54:00.000190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:54:00.008200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:54:00.008232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:22.132 [2024-07-14 13:54:00.016204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:54:00.016244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.132 [2024-07-14 13:54:00.024223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.132 [2024-07-14 13:54:00.024259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.032257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.032286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.040277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.040302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.048298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.048322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.056300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.056322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.064327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.064353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.072356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.072382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.080442] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.080481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.088455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.088482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.096474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.096499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.104486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.104508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.133 [2024-07-14 13:54:00.112522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.133 [2024-07-14 13:54:00.112547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.120552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.120580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.128572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.128597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.136592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.136624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.144611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.144636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.152634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.152659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.160658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.160682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.168683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.168709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.176710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.176741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.184743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.184771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 Running I/O for 5 seconds... 
00:18:22.392 [2024-07-14 13:54:00.192750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.192787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.208398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.208431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.220105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.220133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.232001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.232029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.245624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.245654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.256857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.256900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.268209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.268236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.281046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.281075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.392 [2024-07-14 13:54:00.292845] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.392 [2024-07-14 13:54:00.292891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.207 [2024-07-14 13:54:02.136397] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.207 [2024-07-14 13:54:02.136427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.207 [2024-07-14 13:54:02.148446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.207 [2024-07-14 13:54:02.148476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.207 [2024-07-14 13:54:02.161984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.207 [2024-07-14 13:54:02.162011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.207 [2024-07-14 13:54:02.172588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.207 [2024-07-14 13:54:02.172618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.207 [2024-07-14 13:54:02.183771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.207 [2024-07-14 13:54:02.183800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.196856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.196897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.207314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.207344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.219028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.219054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.230261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.230291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.241499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.241528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.253001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.253029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.266382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.266412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.276808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.276838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.288297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.288326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.299535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.299565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.311354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.311394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.322794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 
[2024-07-14 13:54:02.322825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.334460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.334490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.346278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.346308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.358127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.358158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.369359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.369390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.381131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.381175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.392956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.392984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.404706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.404736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.418493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.418522] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.429615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.429645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.465 [2024-07-14 13:54:02.441254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.465 [2024-07-14 13:54:02.441285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.453432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.453462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.464501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.464531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.475774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.475804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.487105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.487132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.500443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.500472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.511072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.511103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:24.725 [2024-07-14 13:54:02.523425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.523463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.535285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.535314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.546671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.546700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.560021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.560047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.571347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.571377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.582505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.582534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.595418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.595447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.605827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.605857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.617492] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.617521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.628886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.628929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.640708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.640738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.652326] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.652356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.663545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.663575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.675248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.675278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.686546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.686576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.725 [2024-07-14 13:54:02.698463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.725 [2024-07-14 13:54:02.698493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.710009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.710036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.723306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.723336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.733887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.733933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.745604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.745644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.757345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.757375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.771166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.771193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.782062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.782090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.793224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.793254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.806196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 
[2024-07-14 13:54:02.806226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.816884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.816928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.829380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.829410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.840957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.840984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.852089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.852116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.863442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.863472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.876891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.876937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.888306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.888336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.899829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.899859] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.911160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.911187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.922559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.922588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.934171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.934201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.945442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.945472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.985 [2024-07-14 13:54:02.957334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.985 [2024-07-14 13:54:02.957364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:02.968595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:02.968637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:02.980150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:02.980195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:02.991301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:02.991331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:25.278 [2024-07-14 13:54:03.004570] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.004600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.014726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.014755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.026979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.027008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.038440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.038469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.051475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.051505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.061804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.061834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.073567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.073596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.085116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.085143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.096952] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.096979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.108599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.108629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.120421] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.120451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.133762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.133792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.144753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.144782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.156226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.156256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.167125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.167152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.178834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.178863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.190522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.190567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.201900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.201945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.213489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.213519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.224859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.224896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.235940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.235977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.278 [2024-07-14 13:54:03.247734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.278 [2024-07-14 13:54:03.247773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.259091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.259120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.270423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.270453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.282519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 
[2024-07-14 13:54:03.282549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.296060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.296088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.307243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.307273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.318995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.319022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.330874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.330925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.344315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.344345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.355221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.355251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.366174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.366217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.377854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.377893] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.389165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.389209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.402653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.402682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.413596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.413626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.425146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.425173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.436830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.436860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.448254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.448281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.459896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.459941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.546 [2024-07-14 13:54:03.472955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.472983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:25.546 [2024-07-14 13:54:03.483735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.546 [2024-07-14 13:54:03.483764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same two-line error pair repeated with advancing timestamps, roughly every 10 ms, from 2024-07-14 13:54:03.495452 through 13:54:05.208847; repeats omitted ...]
00:18:27.366                                                                  Latency(us)
00:18:27.366 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:18:27.366 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:27.366 Nvme1n1            : 5.01        11357.79  88.73  0.00    0.00  11254.77  4490.43  25437.68
00:18:27.366 ===================================================================================================================
00:18:27.366 Total : 11357.79 88.73 0.00 0.00 11254.77 4490.43 25437.68 00:18:27.366 [2024-07-14 13:54:05.214453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.214481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.222470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.222498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.230518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.230556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.238571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.238622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.246596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.246650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.254613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.254664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.262641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.262691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.270659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.270709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:18:27.366 [2024-07-14 13:54:05.278681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.278731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.366 [2024-07-14 13:54:05.286708] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.366 [2024-07-14 13:54:05.286759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 13:54:05.294720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.294769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 13:54:05.302745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.302795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 13:54:05.310769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.310820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 13:54:05.318808] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.318860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 13:54:05.326821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.326870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 13:54:05.334847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.334906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.367 [2024-07-14 
13:54:05.342872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.367 [2024-07-14 13:54:05.342932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.350903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.350966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.358862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.358898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.366883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.366911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.374971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.375021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.383009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.383060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.391010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.391054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.398982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.399006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.407043] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.407083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.415069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.415118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.423095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.423148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.431055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.431077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.439075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.439097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 [2024-07-14 13:54:05.447095] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.627 [2024-07-14 13:54:05.447118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1450120) - No such process 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1450120 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.627 delay0 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.627 13:54:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:27.627 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.627 [2024-07-14 13:54:05.525241] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:35.743 Initializing NVMe Controllers 00:18:35.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:35.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:35.743 Initialization complete. Launching workers. 
00:18:35.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 16505 00:18:35.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16673, failed to submit 95 00:18:35.743 success 16573, unsuccess 100, failed 0 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.743 rmmod nvme_tcp 00:18:35.743 rmmod nvme_fabrics 00:18:35.743 rmmod nvme_keyring 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1448790 ']' 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1448790 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1448790 ']' 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1448790 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1448790 00:18:35.743 13:54:12 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1448790' 00:18:35.743 killing process with pid 1448790 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1448790 00:18:35.743 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1448790 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.744 13:54:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.125 13:54:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.125 00:18:37.125 real 0m28.525s 00:18:37.125 user 0m41.984s 00:18:37.125 sys 0m8.940s 00:18:37.125 13:54:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:37.125 13:54:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.125 ************************************ 00:18:37.125 END TEST nvmf_zcopy 00:18:37.125 ************************************ 00:18:37.125 13:54:14 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.125 
13:54:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:37.125 13:54:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:37.125 13:54:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.125 ************************************ 00:18:37.125 START TEST nvmf_nmic 00:18:37.125 ************************************ 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.125 * Looking for test storage... 00:18:37.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.125 
13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.125 13:54:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.661 13:54:17 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.661 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:39.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:39.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:39.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.662 13:54:17 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:39.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:18:39.662 00:18:39.662 --- 10.0.0.2 ping statistics --- 00:18:39.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.662 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:39.662 00:18:39.662 --- 10.0.0.1 ping statistics --- 00:18:39.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.662 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1454251 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1454251 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1454251 ']' 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.662 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.662 [2024-07-14 13:54:17.297056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:39.663 [2024-07-14 13:54:17.297131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.663 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.663 [2024-07-14 13:54:17.362237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.663 [2024-07-14 13:54:17.448146] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.663 [2024-07-14 13:54:17.448226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.663 [2024-07-14 13:54:17.448240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.663 [2024-07-14 13:54:17.448254] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.663 [2024-07-14 13:54:17.448263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.663 [2024-07-14 13:54:17.448388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.663 [2024-07-14 13:54:17.448455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.663 [2024-07-14 13:54:17.448502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.663 [2024-07-14 13:54:17.448504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 [2024-07-14 13:54:17.585524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 Malloc0 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 [2024-07-14 13:54:17.636977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:39.663 test case1: single bdev can't be used in multiple subsystems 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.663 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:39.923 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.924 [2024-07-14 13:54:17.660808] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:39.924 [2024-07-14 13:54:17.660836] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:39.924 [2024-07-14 13:54:17.660866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.924 request: 00:18:39.924 { 00:18:39.924 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:39.924 "namespace": { 00:18:39.924 "bdev_name": "Malloc0", 00:18:39.924 "no_auto_visible": false 00:18:39.924 }, 00:18:39.924 "method": "nvmf_subsystem_add_ns", 00:18:39.924 "req_id": 1 00:18:39.924 } 00:18:39.924 Got JSON-RPC error response 00:18:39.924 response: 00:18:39.924 { 00:18:39.924 "code": -32602, 00:18:39.924 "message": "Invalid parameters" 00:18:39.924 } 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:39.924 Adding namespace failed - expected result. 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:39.924 test case2: host connect to nvmf target in multiple paths 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.924 [2024-07-14 13:54:17.668954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.924 13:54:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:40.494 13:54:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:41.061 13:54:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:41.061 13:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:41.061 13:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.061 13:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:41.061 13:54:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:43.596 13:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:43.596 13:54:20 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:43.596 13:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.596 13:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:43.596 13:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.596 13:54:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:43.596 13:54:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:43.596 [global] 00:18:43.596 thread=1 00:18:43.596 invalidate=1 00:18:43.596 rw=write 00:18:43.596 time_based=1 00:18:43.596 runtime=1 00:18:43.596 ioengine=libaio 00:18:43.596 direct=1 00:18:43.596 bs=4096 00:18:43.596 iodepth=1 00:18:43.596 norandommap=0 00:18:43.596 numjobs=1 00:18:43.596 00:18:43.596 verify_dump=1 00:18:43.596 verify_backlog=512 00:18:43.596 verify_state_save=0 00:18:43.596 do_verify=1 00:18:43.596 verify=crc32c-intel 00:18:43.596 [job0] 00:18:43.596 filename=/dev/nvme0n1 00:18:43.596 Could not set queue depth (nvme0n1) 00:18:43.596 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.596 fio-3.35 00:18:43.596 Starting 1 thread 00:18:44.553 00:18:44.553 job0: (groupid=0, jobs=1): err= 0: pid=1454768: Sun Jul 14 13:54:22 2024 00:18:44.553 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:18:44.553 slat (nsec): min=12570, max=34755, avg=22505.05, stdev=8864.10 00:18:44.553 clat (usec): min=40670, max=43952, avg=41097.77, stdev=658.96 00:18:44.553 lat (usec): min=40689, max=43968, avg=41120.28, stdev=657.31 00:18:44.553 clat percentiles (usec): 00:18:44.553 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:44.553 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:18:44.553 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:44.553 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:18:44.553 | 99.99th=[43779] 00:18:44.553 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:44.553 slat (usec): min=13, max=30687, avg=82.43, stdev=1355.24 00:18:44.553 clat (usec): min=149, max=278, avg=182.53, stdev=13.36 00:18:44.553 lat (usec): min=166, max=30966, avg=264.96, stdev=1359.54 00:18:44.553 clat percentiles (usec): 00:18:44.554 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 174], 00:18:44.554 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:18:44.554 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 202], 00:18:44.554 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 277], 99.95th=[ 277], 00:18:44.554 | 99.99th=[ 277] 00:18:44.554 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.554 lat (usec) : 250=95.68%, 500=0.38% 00:18:44.554 lat (msec) : 50=3.94% 00:18:44.554 cpu : usr=0.60%, sys=1.70%, ctx=535, majf=0, minf=2 00:18:44.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.554 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.554 00:18:44.554 Run status group 0 (all jobs): 00:18:44.554 READ: bw=83.8KiB/s (85.8kB/s), 83.8KiB/s-83.8KiB/s (85.8kB/s-85.8kB/s), io=84.0KiB (86.0kB), run=1002-1002msec 00:18:44.554 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB (2097kB), run=1002-1002msec 00:18:44.554 00:18:44.554 Disk stats (read/write): 00:18:44.554 nvme0n1: ios=44/512, 
merge=0/0, ticks=1723/83, in_queue=1806, util=98.70% 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:44.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.554 rmmod nvme_tcp 00:18:44.554 rmmod nvme_fabrics 00:18:44.554 rmmod nvme_keyring 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:44.554 13:54:22 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1454251 ']' 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1454251 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1454251 ']' 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1454251 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:44.554 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1454251 00:18:44.812 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:44.812 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:44.812 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1454251' 00:18:44.812 killing process with pid 1454251 00:18:44.812 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1454251 00:18:44.812 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1454251 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.072 13:54:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.978 13:54:24 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.978 00:18:46.978 real 0m9.831s 00:18:46.978 user 0m22.231s 00:18:46.978 sys 0m2.256s 00:18:46.978 13:54:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:46.978 13:54:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:46.978 ************************************ 00:18:46.978 END TEST nvmf_nmic 00:18:46.978 ************************************ 00:18:46.978 13:54:24 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:46.978 13:54:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:46.978 13:54:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:46.978 13:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.978 ************************************ 00:18:46.978 START TEST nvmf_fio_target 00:18:46.978 ************************************ 00:18:46.978 13:54:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:46.978 * Looking for test storage... 
00:18:46.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.978 13:54:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.978 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.236 13:54:24 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.237 13:54:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.142 
13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:49.142 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.143 
13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:49.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:49.143 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:49.143 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.143 13:54:27 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.143 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:49.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:18:49.402 00:18:49.402 --- 10.0.0.2 ping statistics --- 00:18:49.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.402 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:18:49.402 00:18:49.402 --- 10.0.0.1 ping statistics --- 00:18:49.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.402 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.402 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1456841 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1456841 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 
-- # '[' -z 1456841 ']' 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:49.403 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.403 [2024-07-14 13:54:27.261994] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:49.403 [2024-07-14 13:54:27.262073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.403 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.403 [2024-07-14 13:54:27.332589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.661 [2024-07-14 13:54:27.431184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.661 [2024-07-14 13:54:27.431251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.661 [2024-07-14 13:54:27.431268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.661 [2024-07-14 13:54:27.431282] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.661 [2024-07-14 13:54:27.431293] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.661 [2024-07-14 13:54:27.431351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.661 [2024-07-14 13:54:27.431408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.661 [2024-07-14 13:54:27.431468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.661 [2024-07-14 13:54:27.431471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.661 13:54:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:49.919 [2024-07-14 13:54:27.802377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.919 13:54:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.177 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:50.177 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.434 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:50.434 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:18:50.692 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:50.692 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.950 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:50.950 13:54:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:51.209 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.467 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:51.467 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.724 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:51.724 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.982 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:51.982 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:52.239 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:52.497 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:52.497 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.754 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:52.754 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:53.012 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.270 [2024-07-14 13:54:31.145248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.270 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:53.528 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:53.785 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:54.722 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:54.722 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:54.722 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.722 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:54.722 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:54.722 13:54:32 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:56.706 13:54:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:56.706 [global] 00:18:56.706 thread=1 00:18:56.706 invalidate=1 00:18:56.706 rw=write 00:18:56.706 time_based=1 00:18:56.706 runtime=1 00:18:56.706 ioengine=libaio 00:18:56.706 direct=1 00:18:56.706 bs=4096 00:18:56.706 iodepth=1 00:18:56.706 norandommap=0 00:18:56.706 numjobs=1 00:18:56.706 00:18:56.706 verify_dump=1 00:18:56.706 verify_backlog=512 00:18:56.706 verify_state_save=0 00:18:56.706 do_verify=1 00:18:56.706 verify=crc32c-intel 00:18:56.706 [job0] 00:18:56.706 filename=/dev/nvme0n1 00:18:56.706 [job1] 00:18:56.706 filename=/dev/nvme0n2 00:18:56.706 [job2] 00:18:56.706 filename=/dev/nvme0n3 00:18:56.706 [job3] 00:18:56.706 filename=/dev/nvme0n4 00:18:56.706 Could not set queue depth (nvme0n1) 00:18:56.706 Could not set queue depth (nvme0n2) 00:18:56.706 Could not set queue depth (nvme0n3) 00:18:56.706 Could not set queue depth (nvme0n4) 00:18:56.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.706 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:18:56.706 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.706 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.706 fio-3.35 00:18:56.706 Starting 4 threads 00:18:58.084 00:18:58.084 job0: (groupid=0, jobs=1): err= 0: pid=1457906: Sun Jul 14 13:54:35 2024 00:18:58.084 read: IOPS=19, BW=78.9KiB/s (80.8kB/s)(80.0KiB/1014msec) 00:18:58.084 slat (nsec): min=9070, max=34989, avg=18175.95, stdev=7212.66 00:18:58.084 clat (usec): min=40882, max=41130, avg=40983.59, stdev=55.07 00:18:58.084 lat (usec): min=40916, max=41139, avg=41001.77, stdev=51.53 00:18:58.084 clat percentiles (usec): 00:18:58.084 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:58.084 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.084 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:58.084 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:58.084 | 99.99th=[41157] 00:18:58.084 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:18:58.084 slat (usec): min=9, max=40520, avg=160.06, stdev=2239.51 00:18:58.084 clat (usec): min=147, max=371, avg=212.04, stdev=30.38 00:18:58.084 lat (usec): min=156, max=40759, avg=372.10, stdev=2241.21 00:18:58.084 clat percentiles (usec): 00:18:58.084 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 178], 20.00th=[ 186], 00:18:58.084 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 215], 60.00th=[ 225], 00:18:58.084 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 260], 00:18:58.084 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 371], 00:18:58.084 | 99.99th=[ 371] 00:18:58.084 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.084 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.084 lat (usec) : 250=89.47%, 500=6.77% 
00:18:58.084 lat (msec) : 50=3.76% 00:18:58.084 cpu : usr=1.09%, sys=0.99%, ctx=535, majf=0, minf=1 00:18:58.084 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.084 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.084 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.084 job1: (groupid=0, jobs=1): err= 0: pid=1457908: Sun Jul 14 13:54:35 2024 00:18:58.084 read: IOPS=986, BW=3946KiB/s (4040kB/s)(4072KiB/1032msec) 00:18:58.084 slat (nsec): min=6441, max=33328, avg=9124.50, stdev=3922.05 00:18:58.084 clat (usec): min=204, max=41976, avg=765.55, stdev=4594.63 00:18:58.084 lat (usec): min=211, max=41992, avg=774.67, stdev=4595.90 00:18:58.084 clat percentiles (usec): 00:18:58.084 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:18:58.084 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:18:58.084 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:18:58.084 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:58.084 | 99.99th=[42206] 00:18:58.084 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:18:58.084 slat (nsec): min=7347, max=55604, avg=13565.78, stdev=7371.65 00:18:58.084 clat (usec): min=142, max=439, avg=216.59, stdev=42.87 00:18:58.084 lat (usec): min=158, max=447, avg=230.15, stdev=45.24 00:18:58.084 clat percentiles (usec): 00:18:58.084 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 186], 00:18:58.085 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 215], 00:18:58.085 | 70.00th=[ 235], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 297], 00:18:58.085 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 375], 99.95th=[ 441], 00:18:58.085 | 99.99th=[ 441] 00:18:58.085 bw ( KiB/s): min= 8192, max= 8192, 
per=82.56%, avg=8192.00, stdev= 0.00, samples=1 00:18:58.085 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:58.085 lat (usec) : 250=70.32%, 500=29.04% 00:18:58.085 lat (msec) : 50=0.64% 00:18:58.085 cpu : usr=1.55%, sys=3.01%, ctx=2043, majf=0, minf=2 00:18:58.085 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.085 issued rwts: total=1018,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.085 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.085 job2: (groupid=0, jobs=1): err= 0: pid=1457910: Sun Jul 14 13:54:35 2024 00:18:58.085 read: IOPS=22, BW=91.1KiB/s (93.3kB/s)(92.0KiB/1010msec) 00:18:58.085 slat (nsec): min=14755, max=35453, avg=19766.57, stdev=7766.49 00:18:58.085 clat (usec): min=441, max=41984, avg=37557.71, stdev=11715.95 00:18:58.085 lat (usec): min=476, max=41999, avg=37577.48, stdev=11711.12 00:18:58.085 clat percentiles (usec): 00:18:58.085 | 1.00th=[ 441], 5.00th=[ 445], 10.00th=[40633], 20.00th=[41157], 00:18:58.085 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:58.085 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:18:58.085 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:58.085 | 99.99th=[42206] 00:18:58.085 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:18:58.085 slat (nsec): min=8015, max=54108, avg=20316.57, stdev=8994.31 00:18:58.085 clat (usec): min=158, max=444, avg=257.87, stdev=52.16 00:18:58.085 lat (usec): min=170, max=484, avg=278.18, stdev=51.92 00:18:58.085 clat percentiles (usec): 00:18:58.085 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 208], 00:18:58.085 | 30.00th=[ 221], 40.00th=[ 241], 50.00th=[ 260], 60.00th=[ 269], 00:18:58.085 | 70.00th=[ 281], 80.00th=[ 297], 
90.00th=[ 326], 95.00th=[ 359], 00:18:58.085 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 445], 99.95th=[ 445], 00:18:58.085 | 99.99th=[ 445] 00:18:58.085 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.085 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.085 lat (usec) : 250=41.68%, 500=54.39% 00:18:58.085 lat (msec) : 50=3.93% 00:18:58.085 cpu : usr=0.89%, sys=1.09%, ctx=536, majf=0, minf=1 00:18:58.085 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.085 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.085 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.085 job3: (groupid=0, jobs=1): err= 0: pid=1457911: Sun Jul 14 13:54:35 2024 00:18:58.085 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:18:58.085 slat (nsec): min=15783, max=35173, avg=18869.24, stdev=6604.90 00:18:58.085 clat (usec): min=40948, max=42277, avg=41703.56, stdev=477.71 00:18:58.085 lat (usec): min=40964, max=42293, avg=41722.43, stdev=479.04 00:18:58.085 clat percentiles (usec): 00:18:58.085 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:58.085 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:58.085 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:58.085 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:58.085 | 99.99th=[42206] 00:18:58.085 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:18:58.085 slat (usec): min=7, max=181, avg=21.02, stdev=12.48 00:18:58.085 clat (usec): min=164, max=1129, avg=263.54, stdev=82.87 00:18:58.085 lat (usec): min=175, max=1140, avg=284.56, stdev=84.87 00:18:58.085 clat percentiles (usec): 00:18:58.085 | 1.00th=[ 
176], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 206], 00:18:58.085 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 239], 60.00th=[ 265], 00:18:58.085 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 363], 95.00th=[ 392], 00:18:58.085 | 99.00th=[ 465], 99.50th=[ 742], 99.90th=[ 1123], 99.95th=[ 1123], 00:18:58.085 | 99.99th=[ 1123] 00:18:58.085 bw ( KiB/s): min= 4096, max= 4096, per=41.28%, avg=4096.00, stdev= 0.00, samples=1 00:18:58.085 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:58.085 lat (usec) : 250=53.66%, 500=41.65%, 750=0.38%, 1000=0.19% 00:18:58.085 lat (msec) : 2=0.19%, 50=3.94% 00:18:58.085 cpu : usr=0.49%, sys=0.98%, ctx=536, majf=0, minf=1 00:18:58.085 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:58.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.085 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.085 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:58.085 00:18:58.085 Run status group 0 (all jobs): 00:18:58.085 READ: bw=4194KiB/s (4294kB/s), 78.9KiB/s-3946KiB/s (80.8kB/s-4040kB/s), io=4328KiB (4432kB), run=1010-1032msec 00:18:58.085 WRITE: bw=9922KiB/s (10.2MB/s), 1998KiB/s-3969KiB/s (2046kB/s-4064kB/s), io=10.0MiB (10.5MB), run=1010-1032msec 00:18:58.085 00:18:58.085 Disk stats (read/write): 00:18:58.085 nvme0n1: ios=65/512, merge=0/0, ticks=1035/100, in_queue=1135, util=86.27% 00:18:58.085 nvme0n2: ios=1063/1024, merge=0/0, ticks=638/221, in_queue=859, util=90.95% 00:18:58.085 nvme0n3: ios=74/512, merge=0/0, ticks=782/123, in_queue=905, util=95.08% 00:18:58.085 nvme0n4: ios=73/512, merge=0/0, ticks=766/131, in_queue=897, util=94.20% 00:18:58.085 13:54:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:58.085 [global] 
00:18:58.085 thread=1 00:18:58.085 invalidate=1 00:18:58.085 rw=randwrite 00:18:58.085 time_based=1 00:18:58.085 runtime=1 00:18:58.085 ioengine=libaio 00:18:58.085 direct=1 00:18:58.085 bs=4096 00:18:58.085 iodepth=1 00:18:58.085 norandommap=0 00:18:58.085 numjobs=1 00:18:58.085 00:18:58.085 verify_dump=1 00:18:58.085 verify_backlog=512 00:18:58.085 verify_state_save=0 00:18:58.085 do_verify=1 00:18:58.085 verify=crc32c-intel 00:18:58.085 [job0] 00:18:58.085 filename=/dev/nvme0n1 00:18:58.085 [job1] 00:18:58.085 filename=/dev/nvme0n2 00:18:58.085 [job2] 00:18:58.085 filename=/dev/nvme0n3 00:18:58.085 [job3] 00:18:58.085 filename=/dev/nvme0n4 00:18:58.085 Could not set queue depth (nvme0n1) 00:18:58.085 Could not set queue depth (nvme0n2) 00:18:58.085 Could not set queue depth (nvme0n3) 00:18:58.085 Could not set queue depth (nvme0n4) 00:18:58.085 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.085 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.085 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.085 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.085 fio-3.35 00:18:58.085 Starting 4 threads 00:18:59.461 00:18:59.461 job0: (groupid=0, jobs=1): err= 0: pid=1458144: Sun Jul 14 13:54:37 2024 00:18:59.461 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:18:59.461 slat (nsec): min=12786, max=22880, avg=15694.95, stdev=2398.03 00:18:59.461 clat (usec): min=40370, max=41967, avg=40996.89, stdev=253.84 00:18:59.461 lat (usec): min=40388, max=41990, avg=41012.59, stdev=254.79 00:18:59.461 clat percentiles (usec): 00:18:59.461 | 1.00th=[40109], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:59.461 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:59.461 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:59.461 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:59.461 | 99.99th=[42206] 00:18:59.461 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:18:59.461 slat (nsec): min=9264, max=43260, avg=21466.97, stdev=3938.07 00:18:59.461 clat (usec): min=161, max=271, avg=202.23, stdev=12.29 00:18:59.461 lat (usec): min=170, max=300, avg=223.69, stdev=13.69 00:18:59.461 clat percentiles (usec): 00:18:59.461 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:18:59.461 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:18:59.461 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 217], 95.00th=[ 223], 00:18:59.461 | 99.00th=[ 231], 99.50th=[ 245], 99.90th=[ 273], 99.95th=[ 273], 00:18:59.461 | 99.99th=[ 273] 00:18:59.461 bw ( KiB/s): min= 4096, max= 4096, per=26.22%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.461 lat (usec) : 250=95.69%, 500=0.19% 00:18:59.461 lat (msec) : 50=4.12% 00:18:59.461 cpu : usr=0.79%, sys=1.37%, ctx=535, majf=0, minf=1 00:18:59.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.461 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.461 job1: (groupid=0, jobs=1): err= 0: pid=1458145: Sun Jul 14 13:54:37 2024 00:18:59.461 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:18:59.461 slat (nsec): min=9078, max=14685, avg=13797.00, stdev=1096.45 00:18:59.461 clat (usec): min=40933, max=41990, avg=41115.28, stdev=350.14 00:18:59.461 lat (usec): min=40943, max=42004, avg=41129.07, stdev=350.28 00:18:59.461 clat percentiles 
(usec): 00:18:59.461 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:59.461 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:59.461 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:18:59.461 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:59.461 | 99.99th=[42206] 00:18:59.461 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:18:59.461 slat (nsec): min=7961, max=53242, avg=10417.10, stdev=2502.55 00:18:59.461 clat (usec): min=135, max=526, avg=248.14, stdev=46.72 00:18:59.461 lat (usec): min=143, max=535, avg=258.56, stdev=46.87 00:18:59.461 clat percentiles (usec): 00:18:59.461 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 184], 20.00th=[ 221], 00:18:59.461 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:18:59.461 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 322], 00:18:59.461 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 529], 99.95th=[ 529], 00:18:59.461 | 99.99th=[ 529] 00:18:59.461 bw ( KiB/s): min= 4096, max= 4096, per=26.22%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.461 lat (usec) : 250=50.75%, 500=44.94%, 750=0.19% 00:18:59.461 lat (msec) : 50=4.12% 00:18:59.461 cpu : usr=0.67%, sys=0.39%, ctx=536, majf=0, minf=1 00:18:59.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.461 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.461 job2: (groupid=0, jobs=1): err= 0: pid=1458146: Sun Jul 14 13:54:37 2024 00:18:59.461 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:18:59.461 slat (nsec): min=13290, max=38812, 
avg=17222.50, stdev=5173.35 00:18:59.461 clat (usec): min=40408, max=43263, avg=41331.35, stdev=666.74 00:18:59.461 lat (usec): min=40426, max=43282, avg=41348.58, stdev=668.20 00:18:59.461 clat percentiles (usec): 00:18:59.461 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:59.461 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:59.461 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:59.461 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:59.461 | 99.99th=[43254] 00:18:59.461 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:18:59.461 slat (nsec): min=9311, max=59743, avg=22857.51, stdev=5174.24 00:18:59.461 clat (usec): min=173, max=393, avg=215.87, stdev=18.64 00:18:59.461 lat (usec): min=199, max=438, avg=238.73, stdev=19.95 00:18:59.461 clat percentiles (usec): 00:18:59.461 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 204], 00:18:59.461 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 217], 00:18:59.461 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 233], 95.00th=[ 243], 00:18:59.461 | 99.00th=[ 281], 99.50th=[ 314], 99.90th=[ 396], 99.95th=[ 396], 00:18:59.461 | 99.99th=[ 396] 00:18:59.461 bw ( KiB/s): min= 4096, max= 4096, per=26.22%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.461 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.461 lat (usec) : 250=93.45%, 500=2.43% 00:18:59.461 lat (msec) : 50=4.12% 00:18:59.461 cpu : usr=0.48%, sys=1.74%, ctx=534, majf=0, minf=1 00:18:59.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.461 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.461 job3: 
(groupid=0, jobs=1): err= 0: pid=1458147: Sun Jul 14 13:54:37 2024 00:18:59.461 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:59.461 slat (nsec): min=5781, max=40256, avg=8151.17, stdev=2621.72 00:18:59.461 clat (usec): min=185, max=1033, avg=232.74, stdev=35.00 00:18:59.461 lat (usec): min=191, max=1045, avg=240.89, stdev=36.10 00:18:59.461 clat percentiles (usec): 00:18:59.461 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:18:59.461 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 241], 00:18:59.461 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:18:59.461 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 594], 99.95th=[ 611], 00:18:59.461 | 99.99th=[ 1037] 00:18:59.461 write: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(9.85MiB/1001msec); 0 zone resets 00:18:59.462 slat (nsec): min=7284, max=73521, avg=14450.76, stdev=8471.01 00:18:59.462 clat (usec): min=134, max=533, avg=180.55, stdev=52.23 00:18:59.462 lat (usec): min=142, max=560, avg=195.00, stdev=58.88 00:18:59.462 clat percentiles (usec): 00:18:59.462 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:18:59.462 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 169], 00:18:59.462 | 70.00th=[ 186], 80.00th=[ 210], 90.00th=[ 243], 95.00th=[ 269], 00:18:59.462 | 99.00th=[ 396], 99.50th=[ 424], 99.90th=[ 494], 99.95th=[ 523], 00:18:59.462 | 99.99th=[ 537] 00:18:59.462 bw ( KiB/s): min= 8192, max= 8192, per=52.44%, avg=8192.00, stdev= 0.00, samples=1 00:18:59.462 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:59.462 lat (usec) : 250=84.25%, 500=15.62%, 750=0.11% 00:18:59.462 lat (msec) : 2=0.02% 00:18:59.462 cpu : usr=3.60%, sys=7.30%, ctx=4571, majf=0, minf=2 00:18:59.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:18:59.462 issued rwts: total=2048,2522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.462 00:18:59.462 Run status group 0 (all jobs): 00:18:59.462 READ: bw=8139KiB/s (8334kB/s), 84.7KiB/s-8184KiB/s (86.7kB/s-8380kB/s), io=8456KiB (8659kB), run=1001-1039msec 00:18:59.462 WRITE: bw=15.3MiB/s (16.0MB/s), 1971KiB/s-9.84MiB/s (2018kB/s-10.3MB/s), io=15.9MiB (16.6MB), run=1001-1039msec 00:18:59.462 00:18:59.462 Disk stats (read/write): 00:18:59.462 nvme0n1: ios=68/512, merge=0/0, ticks=1548/99, in_queue=1647, util=93.49% 00:18:59.462 nvme0n2: ios=59/512, merge=0/0, ticks=1222/118, in_queue=1340, util=97.46% 00:18:59.462 nvme0n3: ios=63/512, merge=0/0, ticks=1267/90, in_queue=1357, util=100.00% 00:18:59.462 nvme0n4: ios=1751/2048, merge=0/0, ticks=693/350, in_queue=1043, util=97.57% 00:18:59.462 13:54:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:59.462 [global] 00:18:59.462 thread=1 00:18:59.462 invalidate=1 00:18:59.462 rw=write 00:18:59.462 time_based=1 00:18:59.462 runtime=1 00:18:59.462 ioengine=libaio 00:18:59.462 direct=1 00:18:59.462 bs=4096 00:18:59.462 iodepth=128 00:18:59.462 norandommap=0 00:18:59.462 numjobs=1 00:18:59.462 00:18:59.462 verify_dump=1 00:18:59.462 verify_backlog=512 00:18:59.462 verify_state_save=0 00:18:59.462 do_verify=1 00:18:59.462 verify=crc32c-intel 00:18:59.462 [job0] 00:18:59.462 filename=/dev/nvme0n1 00:18:59.462 [job1] 00:18:59.462 filename=/dev/nvme0n2 00:18:59.462 [job2] 00:18:59.462 filename=/dev/nvme0n3 00:18:59.462 [job3] 00:18:59.462 filename=/dev/nvme0n4 00:18:59.462 Could not set queue depth (nvme0n1) 00:18:59.462 Could not set queue depth (nvme0n2) 00:18:59.462 Could not set queue depth (nvme0n3) 00:18:59.462 Could not set queue depth (nvme0n4) 00:18:59.722 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.722 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.722 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.722 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.722 fio-3.35 00:18:59.722 Starting 4 threads 00:19:01.098 00:19:01.098 job0: (groupid=0, jobs=1): err= 0: pid=1458388: Sun Jul 14 13:54:38 2024 00:19:01.098 read: IOPS=3732, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1005msec) 00:19:01.098 slat (usec): min=2, max=16762, avg=105.53, stdev=803.30 00:19:01.098 clat (usec): min=4514, max=52824, avg=13606.23, stdev=6315.77 00:19:01.098 lat (usec): min=4518, max=52838, avg=13711.76, stdev=6378.39 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 6259], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[10552], 00:19:01.098 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:19:01.098 | 70.00th=[13042], 80.00th=[13829], 90.00th=[19792], 95.00th=[28443], 00:19:01.098 | 99.00th=[40109], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:19:01.098 | 99.99th=[52691] 00:19:01.098 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:01.098 slat (usec): min=3, max=26258, avg=131.57, stdev=924.88 00:19:01.098 clat (usec): min=603, max=63121, avg=18313.80, stdev=13211.16 00:19:01.098 lat (usec): min=839, max=63134, avg=18445.36, stdev=13297.96 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 3621], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 9634], 00:19:01.098 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12911], 60.00th=[15533], 00:19:01.098 | 70.00th=[19792], 80.00th=[24773], 90.00th=[40109], 95.00th=[48497], 00:19:01.098 | 99.00th=[62129], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:19:01.098 | 99.99th=[63177] 00:19:01.098 bw ( KiB/s): min=12288, max=20480, 
per=28.82%, avg=16384.00, stdev=5792.62, samples=2 00:19:01.098 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:19:01.098 lat (usec) : 750=0.01%, 1000=0.08% 00:19:01.098 lat (msec) : 4=0.47%, 10=18.03%, 20=61.58%, 50=17.38%, 100=2.45% 00:19:01.098 cpu : usr=2.79%, sys=4.48%, ctx=373, majf=0, minf=1 00:19:01.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:01.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.098 issued rwts: total=3751,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.098 job1: (groupid=0, jobs=1): err= 0: pid=1458400: Sun Jul 14 13:54:38 2024 00:19:01.098 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:19:01.098 slat (usec): min=2, max=32524, avg=102.12, stdev=882.01 00:19:01.098 clat (usec): min=3944, max=76173, avg=14164.57, stdev=10056.98 00:19:01.098 lat (usec): min=3953, max=76183, avg=14266.70, stdev=10125.92 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 4293], 5.00th=[ 8094], 10.00th=[ 9503], 20.00th=[10421], 00:19:01.098 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12649], 00:19:01.098 | 70.00th=[13566], 80.00th=[15008], 90.00th=[16909], 95.00th=[23725], 00:19:01.098 | 99.00th=[72877], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:19:01.098 | 99.99th=[76022] 00:19:01.098 write: IOPS=4516, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1011msec); 0 zone resets 00:19:01.098 slat (usec): min=3, max=15677, avg=97.67, stdev=625.72 00:19:01.098 clat (usec): min=217, max=76161, avg=14962.27, stdev=8184.96 00:19:01.098 lat (usec): min=788, max=76197, avg=15059.93, stdev=8239.48 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 2089], 5.00th=[ 5604], 10.00th=[ 7242], 20.00th=[ 9503], 00:19:01.098 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11731], 
60.00th=[13173], 00:19:01.098 | 70.00th=[18220], 80.00th=[21890], 90.00th=[25035], 95.00th=[30278], 00:19:01.098 | 99.00th=[44303], 99.50th=[45876], 99.90th=[53216], 99.95th=[56886], 00:19:01.098 | 99.99th=[76022] 00:19:01.098 bw ( KiB/s): min=16384, max=19128, per=31.23%, avg=17756.00, stdev=1940.30, samples=2 00:19:01.098 iops : min= 4096, max= 4782, avg=4439.00, stdev=485.08, samples=2 00:19:01.098 lat (usec) : 250=0.01%, 1000=0.24% 00:19:01.098 lat (msec) : 2=0.20%, 4=0.94%, 10=21.51%, 20=58.92%, 50=16.73% 00:19:01.098 lat (msec) : 100=1.45% 00:19:01.098 cpu : usr=3.96%, sys=5.05%, ctx=372, majf=0, minf=1 00:19:01.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:01.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.098 issued rwts: total=4096,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.098 job2: (groupid=0, jobs=1): err= 0: pid=1458439: Sun Jul 14 13:54:38 2024 00:19:01.098 read: IOPS=2007, BW=8031KiB/s (8224kB/s)(8192KiB/1020msec) 00:19:01.098 slat (usec): min=3, max=36361, avg=244.59, stdev=1775.94 00:19:01.098 clat (msec): min=4, max=103, avg=28.78, stdev=22.13 00:19:01.098 lat (msec): min=4, max=103, avg=29.02, stdev=22.27 00:19:01.098 clat percentiles (msec): 00:19:01.098 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 15], 00:19:01.098 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 22], 60.00th=[ 24], 00:19:01.098 | 70.00th=[ 27], 80.00th=[ 35], 90.00th=[ 59], 95.00th=[ 81], 00:19:01.098 | 99.00th=[ 104], 99.50th=[ 104], 99.90th=[ 104], 99.95th=[ 104], 00:19:01.098 | 99.99th=[ 104] 00:19:01.098 write: IOPS=2343, BW=9373KiB/s (9597kB/s)(9560KiB/1020msec); 0 zone resets 00:19:01.098 slat (usec): min=4, max=21301, avg=198.77, stdev=1282.56 00:19:01.098 clat (usec): min=3723, max=90861, avg=29285.07, stdev=17776.17 00:19:01.098 
lat (usec): min=3729, max=90875, avg=29483.84, stdev=17843.81 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 6128], 5.00th=[10159], 10.00th=[11863], 20.00th=[16188], 00:19:01.098 | 30.00th=[17433], 40.00th=[20317], 50.00th=[23725], 60.00th=[25560], 00:19:01.098 | 70.00th=[35390], 80.00th=[46400], 90.00th=[55313], 95.00th=[66847], 00:19:01.098 | 99.00th=[86508], 99.50th=[86508], 99.90th=[90702], 99.95th=[90702], 00:19:01.098 | 99.99th=[90702] 00:19:01.098 bw ( KiB/s): min= 7600, max=10496, per=15.91%, avg=9048.00, stdev=2047.78, samples=2 00:19:01.098 iops : min= 1900, max= 2624, avg=2262.00, stdev=511.95, samples=2 00:19:01.098 lat (msec) : 4=0.14%, 10=5.54%, 20=30.51%, 50=50.79%, 100=11.63% 00:19:01.098 lat (msec) : 250=1.40% 00:19:01.098 cpu : usr=3.53%, sys=3.04%, ctx=209, majf=0, minf=1 00:19:01.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:01.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.098 issued rwts: total=2048,2390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.098 job3: (groupid=0, jobs=1): err= 0: pid=1458446: Sun Jul 14 13:54:38 2024 00:19:01.098 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1020msec) 00:19:01.098 slat (usec): min=3, max=19742, avg=135.24, stdev=963.57 00:19:01.098 clat (usec): min=5103, max=47077, avg=16551.63, stdev=5891.17 00:19:01.098 lat (usec): min=5112, max=47091, avg=16686.87, stdev=5982.74 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[10945], 20.00th=[11863], 00:19:01.098 | 30.00th=[13042], 40.00th=[14353], 50.00th=[14877], 60.00th=[15139], 00:19:01.098 | 70.00th=[16712], 80.00th=[22152], 90.00th=[26870], 95.00th=[30278], 00:19:01.098 | 99.00th=[32113], 99.50th=[33424], 99.90th=[39060], 99.95th=[42730], 00:19:01.098 | 99.99th=[46924] 
00:19:01.098 write: IOPS=3378, BW=13.2MiB/s (13.8MB/s)(13.5MiB/1020msec); 0 zone resets 00:19:01.098 slat (usec): min=4, max=12763, avg=157.55, stdev=693.52 00:19:01.098 clat (usec): min=4009, max=68340, avg=22795.48, stdev=11637.82 00:19:01.098 lat (usec): min=4018, max=68349, avg=22953.04, stdev=11717.50 00:19:01.098 clat percentiles (usec): 00:19:01.098 | 1.00th=[ 5735], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[13173], 00:19:01.098 | 30.00th=[14222], 40.00th=[17695], 50.00th=[20317], 60.00th=[21365], 00:19:01.098 | 70.00th=[26608], 80.00th=[32637], 90.00th=[40109], 95.00th=[46400], 00:19:01.098 | 99.00th=[51119], 99.50th=[60556], 99.90th=[68682], 99.95th=[68682], 00:19:01.098 | 99.99th=[68682] 00:19:01.098 bw ( KiB/s): min=10496, max=16048, per=23.34%, avg=13272.00, stdev=3925.86, samples=2 00:19:01.098 iops : min= 2624, max= 4012, avg=3318.00, stdev=981.46, samples=2 00:19:01.098 lat (msec) : 10=4.36%, 20=57.24%, 50=37.82%, 100=0.58% 00:19:01.098 cpu : usr=4.71%, sys=6.58%, ctx=422, majf=0, minf=1 00:19:01.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:01.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.099 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.099 00:19:01.099 Run status group 0 (all jobs): 00:19:01.099 READ: bw=49.7MiB/s (52.1MB/s), 8031KiB/s-15.8MiB/s (8224kB/s-16.6MB/s), io=50.7MiB (53.1MB), run=1005-1020msec 00:19:01.099 WRITE: bw=55.5MiB/s (58.2MB/s), 9373KiB/s-17.6MiB/s (9597kB/s-18.5MB/s), io=56.6MiB (59.4MB), run=1005-1020msec 00:19:01.099 00:19:01.099 Disk stats (read/write): 00:19:01.099 nvme0n1: ios=2847/3072, merge=0/0, ticks=25671/34437, in_queue=60108, util=99.30% 00:19:01.099 nvme0n2: ios=3121/3584, merge=0/0, ticks=43949/55645, in_queue=99594, util=94.81% 00:19:01.099 nvme0n3: 
ios=1592/1967, merge=0/0, ticks=15104/17491, in_queue=32595, util=89.83% 00:19:01.099 nvme0n4: ios=2848/3072, merge=0/0, ticks=42287/62360, in_queue=104647, util=96.61% 00:19:01.099 13:54:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:01.099 [global] 00:19:01.099 thread=1 00:19:01.099 invalidate=1 00:19:01.099 rw=randwrite 00:19:01.099 time_based=1 00:19:01.099 runtime=1 00:19:01.099 ioengine=libaio 00:19:01.099 direct=1 00:19:01.099 bs=4096 00:19:01.099 iodepth=128 00:19:01.099 norandommap=0 00:19:01.099 numjobs=1 00:19:01.099 00:19:01.099 verify_dump=1 00:19:01.099 verify_backlog=512 00:19:01.099 verify_state_save=0 00:19:01.099 do_verify=1 00:19:01.099 verify=crc32c-intel 00:19:01.099 [job0] 00:19:01.099 filename=/dev/nvme0n1 00:19:01.099 [job1] 00:19:01.099 filename=/dev/nvme0n2 00:19:01.099 [job2] 00:19:01.099 filename=/dev/nvme0n3 00:19:01.099 [job3] 00:19:01.099 filename=/dev/nvme0n4 00:19:01.099 Could not set queue depth (nvme0n1) 00:19:01.099 Could not set queue depth (nvme0n2) 00:19:01.099 Could not set queue depth (nvme0n3) 00:19:01.099 Could not set queue depth (nvme0n4) 00:19:01.099 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:01.099 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:01.099 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:01.099 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:01.099 fio-3.35 00:19:01.099 Starting 4 threads 00:19:02.472 00:19:02.472 job0: (groupid=0, jobs=1): err= 0: pid=1458724: Sun Jul 14 13:54:40 2024 00:19:02.472 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:19:02.472 slat (usec): min=2, max=11118, avg=107.12, 
stdev=711.21 00:19:02.472 clat (usec): min=1027, max=40022, avg=12903.11, stdev=3812.43 00:19:02.472 lat (usec): min=1045, max=40034, avg=13010.23, stdev=3860.51 00:19:02.472 clat percentiles (usec): 00:19:02.472 | 1.00th=[ 3261], 5.00th=[ 6259], 10.00th=[ 9634], 20.00th=[11338], 00:19:02.472 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:19:02.472 | 70.00th=[13173], 80.00th=[14484], 90.00th=[16581], 95.00th=[20055], 00:19:02.472 | 99.00th=[23462], 99.50th=[32375], 99.90th=[40109], 99.95th=[40109], 00:19:02.472 | 99.99th=[40109] 00:19:02.472 write: IOPS=4911, BW=19.2MiB/s (20.1MB/s)(19.4MiB/1011msec); 0 zone resets 00:19:02.472 slat (usec): min=3, max=17474, avg=91.13, stdev=630.60 00:19:02.472 clat (usec): min=1572, max=51504, avg=13835.87, stdev=7013.36 00:19:02.472 lat (usec): min=1586, max=51510, avg=13927.00, stdev=7050.62 00:19:02.472 clat percentiles (usec): 00:19:02.472 | 1.00th=[ 2278], 5.00th=[ 4621], 10.00th=[ 7439], 20.00th=[10683], 00:19:02.472 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[13042], 00:19:02.472 | 70.00th=[13304], 80.00th=[16057], 90.00th=[19268], 95.00th=[28705], 00:19:02.472 | 99.00th=[43254], 99.50th=[47973], 99.90th=[51643], 99.95th=[51643], 00:19:02.472 | 99.99th=[51643] 00:19:02.472 bw ( KiB/s): min=18664, max=20048, per=26.11%, avg=19356.00, stdev=978.64, samples=2 00:19:02.472 iops : min= 4666, max= 5012, avg=4839.00, stdev=244.66, samples=2 00:19:02.472 lat (msec) : 2=0.41%, 4=2.60%, 10=13.27%, 20=76.24%, 50=7.36% 00:19:02.472 lat (msec) : 100=0.13% 00:19:02.472 cpu : usr=4.06%, sys=6.14%, ctx=507, majf=0, minf=9 00:19:02.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:02.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.472 issued rwts: total=4608,4966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.472 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:19:02.472 job1: (groupid=0, jobs=1): err= 0: pid=1458725: Sun Jul 14 13:54:40 2024 00:19:02.472 read: IOPS=4976, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1002msec) 00:19:02.472 slat (usec): min=2, max=10463, avg=99.14, stdev=602.62 00:19:02.472 clat (usec): min=841, max=26419, avg=12803.96, stdev=2851.35 00:19:02.472 lat (usec): min=4510, max=26448, avg=12903.10, stdev=2887.79 00:19:02.472 clat percentiles (usec): 00:19:02.472 | 1.00th=[ 4948], 5.00th=[ 8094], 10.00th=[ 9765], 20.00th=[11207], 00:19:02.472 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:19:02.472 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15795], 95.00th=[18744], 00:19:02.472 | 99.00th=[22414], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987], 00:19:02.472 | 99.99th=[26346] 00:19:02.472 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:19:02.472 slat (usec): min=3, max=10303, avg=86.53, stdev=498.37 00:19:02.472 clat (usec): min=477, max=47782, avg=12358.74, stdev=3854.26 00:19:02.472 lat (usec): min=607, max=47789, avg=12445.27, stdev=3883.10 00:19:02.472 clat percentiles (usec): 00:19:02.472 | 1.00th=[ 3458], 5.00th=[ 7177], 10.00th=[ 9241], 20.00th=[11076], 00:19:02.472 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:19:02.472 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13304], 95.00th=[16188], 00:19:02.472 | 99.00th=[31327], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:19:02.472 | 99.99th=[47973] 00:19:02.472 bw ( KiB/s): min=20480, max=20480, per=27.63%, avg=20480.00, stdev= 0.00, samples=2 00:19:02.472 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:02.472 lat (usec) : 500=0.01%, 1000=0.01% 00:19:02.472 lat (msec) : 4=0.58%, 10=13.33%, 20=83.34%, 50=2.73% 00:19:02.472 cpu : usr=5.59%, sys=7.99%, ctx=466, majf=0, minf=11 00:19:02.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:02.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.472 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.472 job2: (groupid=0, jobs=1): err= 0: pid=1458726: Sun Jul 14 13:54:40 2024 00:19:02.472 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:19:02.472 slat (usec): min=2, max=25088, avg=121.53, stdev=929.22 00:19:02.472 clat (usec): min=4989, max=45745, avg=15346.47, stdev=4637.55 00:19:02.472 lat (usec): min=4998, max=45750, avg=15467.99, stdev=4690.61 00:19:02.472 clat percentiles (usec): 00:19:02.472 | 1.00th=[ 6194], 5.00th=[10290], 10.00th=[11076], 20.00th=[12649], 00:19:02.473 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14222], 60.00th=[15008], 00:19:02.473 | 70.00th=[15664], 80.00th=[17695], 90.00th=[21365], 95.00th=[24511], 00:19:02.473 | 99.00th=[31065], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:19:02.473 | 99.99th=[45876] 00:19:02.473 write: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1013msec); 0 zone resets 00:19:02.473 slat (usec): min=4, max=19013, avg=96.89, stdev=614.18 00:19:02.473 clat (usec): min=2105, max=37629, avg=14005.08, stdev=4167.32 00:19:02.473 lat (usec): min=2125, max=37635, avg=14101.97, stdev=4201.60 00:19:02.473 clat percentiles (usec): 00:19:02.473 | 1.00th=[ 5407], 5.00th=[ 7635], 10.00th=[ 9765], 20.00th=[11731], 00:19:02.473 | 30.00th=[12649], 40.00th=[13698], 50.00th=[14222], 60.00th=[14484], 00:19:02.473 | 70.00th=[14746], 80.00th=[15664], 90.00th=[16450], 95.00th=[20055], 00:19:02.473 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[37487], 00:19:02.473 | 99.99th=[37487] 00:19:02.473 bw ( KiB/s): min=16880, max=18816, per=24.08%, avg=17848.00, stdev=1368.96, samples=2 00:19:02.473 iops : min= 4220, max= 4704, avg=4462.00, stdev=342.24, samples=2 00:19:02.473 lat (msec) : 4=0.23%, 10=7.44%, 20=83.58%, 50=8.75% 
00:19:02.473 cpu : usr=5.24%, sys=9.19%, ctx=465, majf=0, minf=15 00:19:02.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:02.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.473 issued rwts: total=4096,4589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.473 job3: (groupid=0, jobs=1): err= 0: pid=1458727: Sun Jul 14 13:54:40 2024 00:19:02.473 read: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1005msec) 00:19:02.473 slat (usec): min=3, max=11748, avg=122.32, stdev=714.14 00:19:02.473 clat (usec): min=790, max=42720, avg=15704.51, stdev=3343.70 00:19:02.473 lat (usec): min=4945, max=42725, avg=15826.84, stdev=3371.20 00:19:02.473 clat percentiles (usec): 00:19:02.473 | 1.00th=[ 5473], 5.00th=[11600], 10.00th=[12518], 20.00th=[14222], 00:19:02.473 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:19:02.473 | 70.00th=[15926], 80.00th=[17171], 90.00th=[19792], 95.00th=[22152], 00:19:02.473 | 99.00th=[23987], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:19:02.473 | 99.99th=[42730] 00:19:02.473 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:02.473 slat (usec): min=3, max=17788, avg=117.12, stdev=722.03 00:19:02.473 clat (usec): min=5761, max=60825, avg=15863.42, stdev=5267.05 00:19:02.473 lat (usec): min=6058, max=60830, avg=15980.53, stdev=5296.69 00:19:02.473 clat percentiles (usec): 00:19:02.473 | 1.00th=[10159], 5.00th=[11207], 10.00th=[13304], 20.00th=[13960], 00:19:02.473 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:19:02.473 | 70.00th=[15401], 80.00th=[15795], 90.00th=[18744], 95.00th=[21890], 00:19:02.473 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:19:02.473 | 99.99th=[61080] 00:19:02.473 bw ( KiB/s): min=16384, max=16384, 
per=22.10%, avg=16384.00, stdev= 0.00, samples=2 00:19:02.473 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:02.473 lat (usec) : 1000=0.01% 00:19:02.473 lat (msec) : 10=1.03%, 20=90.58%, 50=8.37%, 100=0.01% 00:19:02.473 cpu : usr=4.98%, sys=7.37%, ctx=351, majf=0, minf=17 00:19:02.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:02.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.473 issued rwts: total=3959,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.473 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.473 00:19:02.473 Run status group 0 (all jobs): 00:19:02.473 READ: bw=68.1MiB/s (71.4MB/s), 15.4MiB/s-19.4MiB/s (16.1MB/s-20.4MB/s), io=68.9MiB (72.3MB), run=1002-1013msec 00:19:02.473 WRITE: bw=72.4MiB/s (75.9MB/s), 15.9MiB/s-20.0MiB/s (16.7MB/s-20.9MB/s), io=73.3MiB (76.9MB), run=1002-1013msec 00:19:02.473 00:19:02.473 Disk stats (read/write): 00:19:02.473 nvme0n1: ios=3987/4096, merge=0/0, ticks=42696/51605, in_queue=94301, util=100.00% 00:19:02.473 nvme0n2: ios=4156/4608, merge=0/0, ticks=29497/33752, in_queue=63249, util=88.02% 00:19:02.473 nvme0n3: ios=3602/3695, merge=0/0, ticks=53887/48324, in_queue=102211, util=93.65% 00:19:02.473 nvme0n4: ios=3257/3584, merge=0/0, ticks=17390/19450, in_queue=36840, util=93.91% 00:19:02.473 13:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:02.473 13:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1458864 00:19:02.473 13:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:02.473 13:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:02.473 [global] 00:19:02.473 thread=1 00:19:02.473 invalidate=1 00:19:02.473 rw=read 00:19:02.473 time_based=1 00:19:02.473 
runtime=10 00:19:02.473 ioengine=libaio 00:19:02.473 direct=1 00:19:02.473 bs=4096 00:19:02.473 iodepth=1 00:19:02.473 norandommap=1 00:19:02.473 numjobs=1 00:19:02.473 00:19:02.473 [job0] 00:19:02.473 filename=/dev/nvme0n1 00:19:02.473 [job1] 00:19:02.473 filename=/dev/nvme0n2 00:19:02.473 [job2] 00:19:02.473 filename=/dev/nvme0n3 00:19:02.473 [job3] 00:19:02.473 filename=/dev/nvme0n4 00:19:02.473 Could not set queue depth (nvme0n1) 00:19:02.473 Could not set queue depth (nvme0n2) 00:19:02.473 Could not set queue depth (nvme0n3) 00:19:02.473 Could not set queue depth (nvme0n4) 00:19:02.473 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.473 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.473 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.473 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.473 fio-3.35 00:19:02.473 Starting 4 threads 00:19:05.747 13:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:05.747 13:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:05.747 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=389120, buflen=4096 00:19:05.747 fio: pid=1458960, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:05.747 13:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:05.747 13:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:05.747 fio: io_u error on file /dev/nvme0n3: Remote I/O 
error: read offset=24956928, buflen=4096 00:19:05.747 fio: pid=1458958, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:06.311 13:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:06.311 13:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:06.311 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=344064, buflen=4096 00:19:06.312 fio: pid=1458954, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:06.312 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=15695872, buflen=4096 00:19:06.312 fio: pid=1458957, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:06.312 13:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:06.312 13:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:06.312 00:19:06.312 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1458954: Sun Jul 14 13:54:44 2024 00:19:06.312 read: IOPS=24, BW=97.7KiB/s (100kB/s)(336KiB/3438msec) 00:19:06.312 slat (usec): min=6, max=23947, avg=382.49, stdev=2693.27 00:19:06.312 clat (usec): min=246, max=42045, avg=40205.31, stdev=6236.18 00:19:06.312 lat (usec): min=264, max=64995, avg=40591.98, stdev=6845.35 00:19:06.312 clat percentiles (usec): 00:19:06.312 | 1.00th=[ 247], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:06.312 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:06.312 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:06.312 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.312 | 99.99th=[42206] 
00:19:06.312 bw ( KiB/s): min= 96, max= 104, per=0.89%, avg=98.67, stdev= 4.13, samples=6 00:19:06.312 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:19:06.312 lat (usec) : 250=1.18%, 1000=1.18% 00:19:06.312 lat (msec) : 50=96.47% 00:19:06.312 cpu : usr=0.00%, sys=0.06%, ctx=87, majf=0, minf=1 00:19:06.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.312 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1458957: Sun Jul 14 13:54:44 2024 00:19:06.312 read: IOPS=1044, BW=4178KiB/s (4278kB/s)(15.0MiB/3669msec) 00:19:06.312 slat (usec): min=4, max=8944, avg=18.17, stdev=212.71 00:19:06.312 clat (usec): min=177, max=42058, avg=930.08, stdev=5247.30 00:19:06.312 lat (usec): min=182, max=49996, avg=946.25, stdev=5268.84 00:19:06.312 clat percentiles (usec): 00:19:06.312 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:19:06.312 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 277], 00:19:06.312 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 367], 00:19:06.312 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.312 | 99.99th=[42206] 00:19:06.312 bw ( KiB/s): min= 96, max=16576, per=37.66%, avg=4148.00, stdev=6527.70, samples=7 00:19:06.312 iops : min= 24, max= 4144, avg=1037.00, stdev=1631.93, samples=7 00:19:06.312 lat (usec) : 250=57.94%, 500=40.33%, 750=0.05% 00:19:06.312 lat (msec) : 50=1.64% 00:19:06.312 cpu : usr=0.68%, sys=1.72%, ctx=3840, majf=0, minf=1 00:19:06.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 issued rwts: total=3833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.312 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1458958: Sun Jul 14 13:54:44 2024 00:19:06.312 read: IOPS=1933, BW=7732KiB/s (7918kB/s)(23.8MiB/3152msec) 00:19:06.312 slat (nsec): min=4083, max=89883, avg=9434.65, stdev=6131.53 00:19:06.312 clat (usec): min=173, max=42054, avg=501.96, stdev=3278.13 00:19:06.312 lat (usec): min=177, max=42144, avg=511.39, stdev=3279.19 00:19:06.312 clat percentiles (usec): 00:19:06.312 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:19:06.312 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 217], 00:19:06.312 | 70.00th=[ 265], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 347], 00:19:06.312 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:06.312 | 99.99th=[42206] 00:19:06.312 bw ( KiB/s): min= 176, max=19496, per=73.70%, avg=8118.67, stdev=7724.09, samples=6 00:19:06.312 iops : min= 44, max= 4874, avg=2029.67, stdev=1931.02, samples=6 00:19:06.312 lat (usec) : 250=69.02%, 500=29.82%, 750=0.39%, 1000=0.10% 00:19:06.312 lat (msec) : 2=0.02%, 50=0.64% 00:19:06.312 cpu : usr=1.08%, sys=2.32%, ctx=6096, majf=0, minf=1 00:19:06.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 issued rwts: total=6094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.312 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1458960: Sun Jul 14 13:54:44 2024 
00:19:06.312 read: IOPS=32, BW=130KiB/s (133kB/s)(380KiB/2925msec) 00:19:06.312 slat (nsec): min=11202, max=47625, avg=23274.45, stdev=9977.42 00:19:06.312 clat (usec): min=278, max=42418, avg=30527.90, stdev=18060.93 00:19:06.312 lat (usec): min=315, max=42429, avg=30551.27, stdev=18060.39 00:19:06.312 clat percentiles (usec): 00:19:06.312 | 1.00th=[ 281], 5.00th=[ 383], 10.00th=[ 424], 20.00th=[ 523], 00:19:06.312 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:06.312 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:06.312 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:06.312 | 99.99th=[42206] 00:19:06.312 bw ( KiB/s): min= 96, max= 200, per=1.23%, avg=136.00, stdev=43.08, samples=5 00:19:06.312 iops : min= 24, max= 50, avg=34.00, stdev=10.77, samples=5 00:19:06.312 lat (usec) : 500=14.58%, 750=11.46% 00:19:06.312 lat (msec) : 50=72.92% 00:19:06.312 cpu : usr=0.00%, sys=0.14%, ctx=99, majf=0, minf=1 00:19:06.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.312 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.312 00:19:06.312 Run status group 0 (all jobs): 00:19:06.312 READ: bw=10.8MiB/s (11.3MB/s), 97.7KiB/s-7732KiB/s (100kB/s-7918kB/s), io=39.5MiB (41.4MB), run=2925-3669msec 00:19:06.312 00:19:06.312 Disk stats (read/write): 00:19:06.312 nvme0n1: ios=82/0, merge=0/0, ticks=3296/0, in_queue=3296, util=95.05% 00:19:06.312 nvme0n2: ios=3715/0, merge=0/0, ticks=3751/0, in_queue=3751, util=99.89% 00:19:06.312 nvme0n3: ios=6092/0, merge=0/0, ticks=2975/0, in_queue=2975, util=96.72% 00:19:06.312 nvme0n4: ios=143/0, merge=0/0, ticks=3382/0, in_queue=3382, util=99.73% 00:19:06.570 13:54:44 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:06.570 13:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:06.828 13:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:06.828 13:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:07.086 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.086 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:07.343 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.343 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:07.600 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:07.600 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1458864 00:19:07.600 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:07.600 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.857 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:07.857 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:07.858 
13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:07.858 nvmf hotplug test: fio failed as expected 00:19:07.858 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.114 rmmod nvme_tcp 00:19:08.114 rmmod nvme_fabrics 00:19:08.114 rmmod nvme_keyring 
00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1456841 ']' 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1456841 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1456841 ']' 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1456841 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:08.114 13:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1456841 00:19:08.114 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:08.114 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:08.114 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1456841' 00:19:08.114 killing process with pid 1456841 00:19:08.114 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1456841 00:19:08.114 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1456841 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.371 13:54:46 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.371 13:54:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.898 13:54:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:10.898 00:19:10.898 real 0m23.387s 00:19:10.898 user 1m21.510s 00:19:10.898 sys 0m6.252s 00:19:10.898 13:54:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:10.898 13:54:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.898 ************************************ 00:19:10.898 END TEST nvmf_fio_target 00:19:10.898 ************************************ 00:19:10.898 13:54:48 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:10.898 13:54:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:10.898 13:54:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:10.898 13:54:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:10.898 ************************************ 00:19:10.898 START TEST nvmf_bdevio 00:19:10.898 ************************************ 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:10.898 * Looking for test storage... 
00:19:10.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.898 13:54:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:12.801 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:12.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:12.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:19:12.802 00:19:12.802 --- 10.0.0.2 ping statistics --- 00:19:12.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.802 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:12.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:19:12.802 00:19:12.802 --- 10.0.0.1 ping statistics --- 00:19:12.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.802 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1461570 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1461570 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1461570 ']' 00:19:12.802 13:54:50 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:12.802 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:12.802 [2024-07-14 13:54:50.587984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:12.802 [2024-07-14 13:54:50.588083] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.802 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.802 [2024-07-14 13:54:50.654310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.802 [2024-07-14 13:54:50.743566] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.802 [2024-07-14 13:54:50.743624] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.802 [2024-07-14 13:54:50.743653] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.802 [2024-07-14 13:54:50.743665] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.802 [2024-07-14 13:54:50.743674] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.802 [2024-07-14 13:54:50.743765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:12.802 [2024-07-14 13:54:50.743854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:12.802 [2024-07-14 13:54:50.744279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:12.802 [2024-07-14 13:54:50.744284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 [2024-07-14 13:54:50.898762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 Malloc0 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 [2024-07-14 13:54:50.952317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:19:13.060 { 00:19:13.060 "params": { 00:19:13.060 "name": "Nvme$subsystem", 00:19:13.060 "trtype": "$TEST_TRANSPORT", 00:19:13.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:13.060 "adrfam": "ipv4", 00:19:13.060 "trsvcid": "$NVMF_PORT", 00:19:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:13.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:13.060 "hdgst": ${hdgst:-false}, 00:19:13.060 "ddgst": ${ddgst:-false} 00:19:13.060 }, 00:19:13.060 "method": "bdev_nvme_attach_controller" 00:19:13.060 } 00:19:13.060 EOF 00:19:13.060 )") 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:13.060 13:54:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:13.060 "params": { 00:19:13.060 "name": "Nvme1", 00:19:13.060 "trtype": "tcp", 00:19:13.060 "traddr": "10.0.0.2", 00:19:13.060 "adrfam": "ipv4", 00:19:13.060 "trsvcid": "4420", 00:19:13.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.060 "hdgst": false, 00:19:13.060 "ddgst": false 00:19:13.060 }, 00:19:13.060 "method": "bdev_nvme_attach_controller" 00:19:13.060 }' 00:19:13.060 [2024-07-14 13:54:51.000344] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:13.061 [2024-07-14 13:54:51.000409] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461598 ] 00:19:13.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.318 [2024-07-14 13:54:51.062263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:13.318 [2024-07-14 13:54:51.154549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.318 [2024-07-14 13:54:51.154598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.318 [2024-07-14 13:54:51.154601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.576 I/O targets: 00:19:13.576 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:13.576 00:19:13.576 00:19:13.576 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.576 http://cunit.sourceforge.net/ 00:19:13.576 00:19:13.576 00:19:13.576 Suite: bdevio tests on: Nvme1n1 00:19:13.576 Test: blockdev write read block ...passed 00:19:13.576 Test: blockdev write zeroes read block ...passed 00:19:13.576 Test: blockdev write zeroes read no split ...passed 00:19:13.576 Test: blockdev write zeroes read split ...passed 00:19:13.576 Test: blockdev write zeroes read split partial ...passed 00:19:13.576 Test: blockdev reset ...[2024-07-14 13:54:51.476920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:13.576 [2024-07-14 13:54:51.477027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7bf80 (9): Bad file descriptor 00:19:13.576 [2024-07-14 13:54:51.493926] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:13.576 passed 00:19:13.576 Test: blockdev write read 8 blocks ...passed 00:19:13.576 Test: blockdev write read size > 128k ...passed 00:19:13.576 Test: blockdev write read invalid size ...passed 00:19:13.834 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.834 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.834 Test: blockdev write read max offset ...passed 00:19:13.834 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:13.834 Test: blockdev writev readv 8 blocks ...passed 00:19:13.834 Test: blockdev writev readv 30 x 1block ...passed 00:19:13.834 Test: blockdev writev readv block ...passed 00:19:13.834 Test: blockdev writev readv size > 128k ...passed 00:19:13.834 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:13.834 Test: blockdev comparev and writev ...[2024-07-14 13:54:51.707189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.707224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.707262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.707290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.707678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.707705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.707740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.707766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.708145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.708172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.708233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.708607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.708633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.708667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.834 [2024-07-14 13:54:51.708694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:13.834 passed 00:19:13.834 Test: blockdev nvme passthru rw ...passed 00:19:13.834 Test: blockdev nvme passthru vendor specific ...[2024-07-14 13:54:51.792162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.834 [2024-07-14 13:54:51.792190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.792354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.834 [2024-07-14 13:54:51.792391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.792547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.834 [2024-07-14 13:54:51.792572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:13.834 [2024-07-14 13:54:51.792725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.834 [2024-07-14 13:54:51.792750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:13.834 passed 00:19:13.834 Test: blockdev nvme admin passthru ...passed 00:19:14.092 Test: blockdev copy ...passed 00:19:14.092 00:19:14.092 Run Summary: Type Total Ran Passed Failed Inactive 00:19:14.092 suites 1 1 n/a 0 0 00:19:14.092 tests 23 23 23 0 0 00:19:14.092 asserts 152 152 152 0 n/a 00:19:14.092 00:19:14.092 Elapsed time = 0.966 seconds 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.092 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.092 rmmod nvme_tcp 00:19:14.092 rmmod nvme_fabrics 00:19:14.350 rmmod nvme_keyring 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1461570 ']' 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1461570 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 1461570 ']' 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1461570 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1461570 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1461570' 00:19:14.350 killing process with pid 1461570 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 
1461570 00:19:14.350 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1461570 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.638 13:54:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.539 13:54:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:16.539 00:19:16.539 real 0m6.107s 00:19:16.539 user 0m9.237s 00:19:16.539 sys 0m2.002s 00:19:16.539 13:54:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:16.539 13:54:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.539 ************************************ 00:19:16.539 END TEST nvmf_bdevio 00:19:16.539 ************************************ 00:19:16.539 13:54:54 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:16.539 13:54:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:16.539 13:54:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:16.539 13:54:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.539 ************************************ 00:19:16.539 START TEST nvmf_auth_target 00:19:16.539 ************************************ 00:19:16.539 13:54:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:16.797 * Looking for test storage... 00:19:16.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:16.797 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.798 
13:54:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.798 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.698 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:18.699 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:18.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.699 13:54:56 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:18.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:18.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:18.699 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:19:18.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:19:18.699 00:19:18.699 --- 10.0.0.2 ping statistics --- 00:19:18.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.699 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:18.699 00:19:18.699 --- 10.0.0.1 ping statistics --- 00:19:18.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.699 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1463668 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1463668 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1463668 ']' 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:18.699 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.957 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1463750 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ddbec272ddc95d91acf1bf3011cb20d46d9d9e4c04358f32 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SHI 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ddbec272ddc95d91acf1bf3011cb20d46d9d9e4c04358f32 0 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ddbec272ddc95d91acf1bf3011cb20d46d9d9e4c04358f32 0 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ddbec272ddc95d91acf1bf3011cb20d46d9d9e4c04358f32 00:19:18.958 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:18.958 13:54:56 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SHI 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SHI 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.SHI 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=27bd54d342c5ce93e2a935cfc3d49731f886f108f30433e1854695fb1b2b5ea7 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.equ 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 27bd54d342c5ce93e2a935cfc3d49731f886f108f30433e1854695fb1b2b5ea7 3 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 27bd54d342c5ce93e2a935cfc3d49731f886f108f30433e1854695fb1b2b5ea7 3 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@704 -- # key=27bd54d342c5ce93e2a935cfc3d49731f886f108f30433e1854695fb1b2b5ea7 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:19.216 13:54:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.equ 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.equ 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.equ 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7a39caf588e085a45412796901e50753 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2Ab 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7a39caf588e085a45412796901e50753 1 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7a39caf588e085a45412796901e50753 1 00:19:19.216 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.216 13:54:57 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7a39caf588e085a45412796901e50753 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2Ab 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2Ab 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.2Ab 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d190aef3066fb2649863aa1cfc7006d58b7c0f378dc6fa4d 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VB4 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d190aef3066fb2649863aa1cfc7006d58b7c0f378dc6fa4d 2 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
d190aef3066fb2649863aa1cfc7006d58b7c0f378dc6fa4d 2 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d190aef3066fb2649863aa1cfc7006d58b7c0f378dc6fa4d 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VB4 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VB4 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.VB4 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c387c2e3797393d17375fa75d182b82be75c403160c0cf79 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9YC 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key c387c2e3797393d17375fa75d182b82be75c403160c0cf79 2 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c387c2e3797393d17375fa75d182b82be75c403160c0cf79 2 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c387c2e3797393d17375fa75d182b82be75c403160c0cf79 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9YC 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9YC 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.9YC 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de9576e3a506e4490ea7ae5032a6ad19 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MmL 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de9576e3a506e4490ea7ae5032a6ad19 1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 de9576e3a506e4490ea7ae5032a6ad19 1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=de9576e3a506e4490ea7ae5032a6ad19 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:19.217 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MmL 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MmL 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.MmL 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9bfd10deac4c0a6a3451e14092f3f58268c94304a495299acaa1977917c8d187 00:19:19.475 13:54:57 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.j8S 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9bfd10deac4c0a6a3451e14092f3f58268c94304a495299acaa1977917c8d187 3 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9bfd10deac4c0a6a3451e14092f3f58268c94304a495299acaa1977917c8d187 3 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9bfd10deac4c0a6a3451e14092f3f58268c94304a495299acaa1977917c8d187 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.j8S 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.j8S 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.j8S 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1463668 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1463668 ']' 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:19.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.475 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1463750 /var/tmp/host.sock 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1463750 ']' 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:19.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.746 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SHI 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SHI 00:19:20.004 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SHI 00:19:20.262 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.equ ]] 00:19:20.262 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.equ 00:19:20.262 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.262 13:54:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.262 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.262 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.equ 00:19:20.262 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.equ 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ab 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ab 00:19:20.519 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ab 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.VB4 ]] 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VB4 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VB4 00:19:20.777 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VB4 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9YC 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9YC 00:19:21.035 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9YC 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.MmL ]] 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MmL 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MmL 00:19:21.293 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.MmL 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.j8S 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.j8S 00:19:21.550 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.j8S 00:19:21.807 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:21.807 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:21.807 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.807 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.807 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.808 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.066 13:54:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.066 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.324 00:19:22.324 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.324 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.324 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.582 
13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.582 { 00:19:22.582 "cntlid": 1, 00:19:22.582 "qid": 0, 00:19:22.582 "state": "enabled", 00:19:22.582 "listen_address": { 00:19:22.582 "trtype": "TCP", 00:19:22.582 "adrfam": "IPv4", 00:19:22.582 "traddr": "10.0.0.2", 00:19:22.582 "trsvcid": "4420" 00:19:22.582 }, 00:19:22.582 "peer_address": { 00:19:22.582 "trtype": "TCP", 00:19:22.582 "adrfam": "IPv4", 00:19:22.582 "traddr": "10.0.0.1", 00:19:22.582 "trsvcid": "48716" 00:19:22.582 }, 00:19:22.582 "auth": { 00:19:22.582 "state": "completed", 00:19:22.582 "digest": "sha256", 00:19:22.582 "dhgroup": "null" 00:19:22.582 } 00:19:22.582 } 00:19:22.582 ]' 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.582 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:22.839 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.771 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.028 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.286 00:19:24.286 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.286 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.286 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.544 { 00:19:24.544 "cntlid": 3, 00:19:24.544 "qid": 0, 00:19:24.544 "state": "enabled", 00:19:24.544 "listen_address": { 00:19:24.544 "trtype": "TCP", 00:19:24.544 "adrfam": "IPv4", 00:19:24.544 "traddr": "10.0.0.2", 00:19:24.544 "trsvcid": "4420" 00:19:24.544 }, 00:19:24.544 "peer_address": { 00:19:24.544 "trtype": "TCP", 00:19:24.544 "adrfam": "IPv4", 00:19:24.544 "traddr": "10.0.0.1", 00:19:24.544 "trsvcid": "54402" 00:19:24.544 }, 00:19:24.544 "auth": { 00:19:24.544 "state": "completed", 00:19:24.544 "digest": "sha256", 00:19:24.544 "dhgroup": "null" 00:19:24.544 } 00:19:24.544 } 00:19:24.544 ]' 00:19:24.544 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.801 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.802 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.802 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.802 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.802 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.802 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.802 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.060 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.992 13:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.249 13:55:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.249 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.506 00:19:26.507 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.507 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.507 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.764 13:55:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.764 { 00:19:26.764 "cntlid": 5, 00:19:26.764 "qid": 0, 00:19:26.764 "state": "enabled", 00:19:26.764 "listen_address": { 00:19:26.764 "trtype": "TCP", 00:19:26.764 "adrfam": "IPv4", 00:19:26.764 "traddr": "10.0.0.2", 00:19:26.764 "trsvcid": "4420" 00:19:26.764 }, 00:19:26.764 "peer_address": { 00:19:26.764 "trtype": "TCP", 00:19:26.764 "adrfam": "IPv4", 00:19:26.764 "traddr": "10.0.0.1", 00:19:26.764 "trsvcid": "54426" 00:19:26.764 }, 00:19:26.764 "auth": { 00:19:26.764 "state": "completed", 00:19:26.764 "digest": "sha256", 00:19:26.764 "dhgroup": "null" 00:19:26.764 } 00:19:26.764 } 00:19:26.764 ]' 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.764 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.021 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:27.021 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.021 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.021 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.021 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.279 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.210 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.467 13:55:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.467 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.724 00:19:28.724 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.724 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.724 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.981 { 00:19:28.981 "cntlid": 7, 00:19:28.981 "qid": 0, 00:19:28.981 "state": "enabled", 00:19:28.981 "listen_address": { 00:19:28.981 "trtype": "TCP", 00:19:28.981 "adrfam": "IPv4", 00:19:28.981 "traddr": "10.0.0.2", 00:19:28.981 "trsvcid": "4420" 00:19:28.981 }, 00:19:28.981 "peer_address": { 00:19:28.981 "trtype": "TCP", 00:19:28.981 "adrfam": "IPv4", 00:19:28.981 "traddr": "10.0.0.1", 00:19:28.981 "trsvcid": "54462" 00:19:28.981 }, 00:19:28.981 "auth": { 00:19:28.981 "state": "completed", 00:19:28.981 "digest": "sha256", 00:19:28.981 "dhgroup": "null" 00:19:28.981 } 00:19:28.981 } 00:19:28.981 ]' 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.981 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.238 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:29.238 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.238 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.238 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.238 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.497 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.429 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.687 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.945 00:19:30.945 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.945 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.945 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.202 { 00:19:31.202 "cntlid": 9, 00:19:31.202 "qid": 0, 00:19:31.202 "state": "enabled", 00:19:31.202 "listen_address": { 00:19:31.202 "trtype": "TCP", 00:19:31.202 "adrfam": "IPv4", 00:19:31.202 "traddr": "10.0.0.2", 00:19:31.202 "trsvcid": "4420" 00:19:31.202 }, 00:19:31.202 "peer_address": { 00:19:31.202 "trtype": "TCP", 00:19:31.202 "adrfam": "IPv4", 00:19:31.202 "traddr": "10.0.0.1", 00:19:31.202 "trsvcid": "54500" 00:19:31.202 }, 00:19:31.202 "auth": { 00:19:31.202 "state": "completed", 00:19:31.202 "digest": "sha256", 00:19:31.202 "dhgroup": "ffdhe2048" 00:19:31.202 } 00:19:31.202 } 00:19:31.202 ]' 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.202 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.203 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.203 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.203 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.203 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.203 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.461 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:19:32.393 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.393 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.393 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.393 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.650 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.650 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.650 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.650 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.908 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.164 00:19:33.164 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.164 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.164 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.434 { 00:19:33.434 "cntlid": 11, 00:19:33.434 "qid": 0, 00:19:33.434 "state": "enabled", 00:19:33.434 "listen_address": { 00:19:33.434 "trtype": "TCP", 00:19:33.434 "adrfam": "IPv4", 00:19:33.434 "traddr": "10.0.0.2", 00:19:33.434 "trsvcid": "4420" 00:19:33.434 }, 00:19:33.434 "peer_address": { 00:19:33.434 "trtype": "TCP", 00:19:33.434 "adrfam": "IPv4", 00:19:33.434 "traddr": "10.0.0.1", 00:19:33.434 "trsvcid": "54538" 00:19:33.434 }, 00:19:33.434 "auth": { 00:19:33.434 "state": "completed", 00:19:33.434 "digest": "sha256", 00:19:33.434 "dhgroup": "ffdhe2048" 00:19:33.434 } 00:19:33.434 } 00:19:33.434 ]' 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:33.434 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.734 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.667 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.925 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.183 00:19:35.183 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.183 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.183 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.452 { 00:19:35.452 "cntlid": 13, 00:19:35.452 "qid": 0, 00:19:35.452 "state": "enabled", 00:19:35.452 "listen_address": { 00:19:35.452 "trtype": "TCP", 00:19:35.452 "adrfam": "IPv4", 00:19:35.452 "traddr": "10.0.0.2", 00:19:35.452 "trsvcid": "4420" 00:19:35.452 }, 00:19:35.452 "peer_address": { 00:19:35.452 "trtype": "TCP", 00:19:35.452 "adrfam": "IPv4", 00:19:35.452 "traddr": "10.0.0.1", 00:19:35.452 "trsvcid": "42802" 00:19:35.452 }, 00:19:35.452 "auth": { 00:19:35.452 "state": "completed", 00:19:35.452 "digest": "sha256", 00:19:35.452 "dhgroup": "ffdhe2048" 00:19:35.452 } 00:19:35.452 } 00:19:35.452 ]' 00:19:35.452 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:35.710 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.968 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.901 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:37.159 13:55:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.159 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.417 00:19:37.417 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.417 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.417 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.675 { 00:19:37.675 "cntlid": 15, 00:19:37.675 "qid": 0, 00:19:37.675 "state": "enabled", 00:19:37.675 "listen_address": { 00:19:37.675 "trtype": "TCP", 00:19:37.675 "adrfam": "IPv4", 00:19:37.675 "traddr": "10.0.0.2", 00:19:37.675 "trsvcid": "4420" 00:19:37.675 }, 00:19:37.675 "peer_address": { 00:19:37.675 "trtype": "TCP", 00:19:37.675 "adrfam": "IPv4", 00:19:37.675 "traddr": "10.0.0.1", 00:19:37.675 "trsvcid": "42832" 00:19:37.675 }, 00:19:37.675 "auth": { 00:19:37.675 "state": "completed", 00:19:37.675 "digest": "sha256", 00:19:37.675 "dhgroup": "ffdhe2048" 00:19:37.675 } 00:19:37.675 } 00:19:37.675 ]' 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.675 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.933 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.933 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.933 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.933 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:37.933 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.190 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.124 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:39.382 13:55:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.382 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.640 00:19:39.640 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.640 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:19:39.640 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.898 { 00:19:39.898 "cntlid": 17, 00:19:39.898 "qid": 0, 00:19:39.898 "state": "enabled", 00:19:39.898 "listen_address": { 00:19:39.898 "trtype": "TCP", 00:19:39.898 "adrfam": "IPv4", 00:19:39.898 "traddr": "10.0.0.2", 00:19:39.898 "trsvcid": "4420" 00:19:39.898 }, 00:19:39.898 "peer_address": { 00:19:39.898 "trtype": "TCP", 00:19:39.898 "adrfam": "IPv4", 00:19:39.898 "traddr": "10.0.0.1", 00:19:39.898 "trsvcid": "42872" 00:19:39.898 }, 00:19:39.898 "auth": { 00:19:39.898 "state": "completed", 00:19:39.898 "digest": "sha256", 00:19:39.898 "dhgroup": "ffdhe3072" 00:19:39.898 } 00:19:39.898 } 00:19:39.898 ]' 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.898 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.156 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.156 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.156 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.156 13:55:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.156 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.414 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.348 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe3072 1 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.607 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.865 00:19:41.865 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.865 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.865 13:55:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.123 { 00:19:42.123 "cntlid": 19, 00:19:42.123 "qid": 0, 00:19:42.123 "state": "enabled", 00:19:42.123 "listen_address": { 00:19:42.123 "trtype": "TCP", 00:19:42.123 "adrfam": "IPv4", 00:19:42.123 "traddr": "10.0.0.2", 00:19:42.123 "trsvcid": "4420" 00:19:42.123 }, 00:19:42.123 "peer_address": { 00:19:42.123 "trtype": "TCP", 00:19:42.123 "adrfam": "IPv4", 00:19:42.123 "traddr": "10.0.0.1", 00:19:42.123 "trsvcid": "42914" 00:19:42.123 }, 00:19:42.123 "auth": { 00:19:42.123 "state": "completed", 00:19:42.123 "digest": "sha256", 00:19:42.123 "dhgroup": "ffdhe3072" 00:19:42.123 } 00:19:42.123 } 00:19:42.123 ]' 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.123 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.380 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.380 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.380 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:42.380 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.380 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.638 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.572 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.830 13:55:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.830 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.088 00:19:44.088 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.088 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:44.088 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.346 { 00:19:44.346 "cntlid": 21, 00:19:44.346 "qid": 0, 00:19:44.346 "state": "enabled", 00:19:44.346 "listen_address": { 00:19:44.346 "trtype": "TCP", 00:19:44.346 "adrfam": "IPv4", 00:19:44.346 "traddr": "10.0.0.2", 00:19:44.346 "trsvcid": "4420" 00:19:44.346 }, 00:19:44.346 "peer_address": { 00:19:44.346 "trtype": "TCP", 00:19:44.346 "adrfam": "IPv4", 00:19:44.346 "traddr": "10.0.0.1", 00:19:44.346 "trsvcid": "48882" 00:19:44.346 }, 00:19:44.346 "auth": { 00:19:44.346 "state": "completed", 00:19:44.346 "digest": "sha256", 00:19:44.346 "dhgroup": "ffdhe3072" 00:19:44.346 } 00:19:44.346 } 00:19:44.346 ]' 00:19:44.346 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.604 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.861 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.798 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.055 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.312 00:19:46.312 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.312 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.312 13:55:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.570 { 00:19:46.570 "cntlid": 23, 00:19:46.570 "qid": 0, 00:19:46.570 "state": "enabled", 00:19:46.570 "listen_address": { 00:19:46.570 "trtype": "TCP", 00:19:46.570 "adrfam": "IPv4", 00:19:46.570 "traddr": "10.0.0.2", 00:19:46.570 "trsvcid": "4420" 00:19:46.570 }, 00:19:46.570 "peer_address": { 00:19:46.570 "trtype": "TCP", 00:19:46.570 "adrfam": "IPv4", 00:19:46.570 "traddr": "10.0.0.1", 00:19:46.570 "trsvcid": "48916" 00:19:46.570 }, 00:19:46.570 "auth": { 00:19:46.570 "state": "completed", 00:19:46.570 "digest": "sha256", 00:19:46.570 "dhgroup": "ffdhe3072" 00:19:46.570 } 00:19:46.570 } 00:19:46.570 ]' 00:19:46.570 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.828 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.085 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.018 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.276 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.840 00:19:48.840 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.840 13:55:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.840 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.097 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.097 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.097 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.097 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.098 { 00:19:49.098 "cntlid": 25, 00:19:49.098 "qid": 0, 00:19:49.098 "state": "enabled", 00:19:49.098 "listen_address": { 00:19:49.098 "trtype": "TCP", 00:19:49.098 "adrfam": "IPv4", 00:19:49.098 "traddr": "10.0.0.2", 00:19:49.098 "trsvcid": "4420" 00:19:49.098 }, 00:19:49.098 "peer_address": { 00:19:49.098 "trtype": "TCP", 00:19:49.098 "adrfam": "IPv4", 00:19:49.098 "traddr": "10.0.0.1", 00:19:49.098 "trsvcid": "48942" 00:19:49.098 }, 00:19:49.098 "auth": { 00:19:49.098 "state": "completed", 00:19:49.098 "digest": "sha256", 00:19:49.098 "dhgroup": "ffdhe4096" 00:19:49.098 } 00:19:49.098 } 00:19:49.098 ]' 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.098 13:55:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.098 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.098 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.355 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:19:50.287 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.287 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.287 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.287 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.545 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.545 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.545 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.545 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.828 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.086 00:19:51.086 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:19:51.086 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.086 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.342 { 00:19:51.342 "cntlid": 27, 00:19:51.342 "qid": 0, 00:19:51.342 "state": "enabled", 00:19:51.342 "listen_address": { 00:19:51.342 "trtype": "TCP", 00:19:51.342 "adrfam": "IPv4", 00:19:51.342 "traddr": "10.0.0.2", 00:19:51.342 "trsvcid": "4420" 00:19:51.342 }, 00:19:51.342 "peer_address": { 00:19:51.342 "trtype": "TCP", 00:19:51.342 "adrfam": "IPv4", 00:19:51.342 "traddr": "10.0.0.1", 00:19:51.342 "trsvcid": "48974" 00:19:51.342 }, 00:19:51.342 "auth": { 00:19:51.342 "state": "completed", 00:19:51.342 "digest": "sha256", 00:19:51.342 "dhgroup": "ffdhe4096" 00:19:51.342 } 00:19:51.342 } 00:19:51.342 ]' 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.342 13:55:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.599 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.599 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.599 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.857 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.789 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.047 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.613 00:19:53.613 13:55:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.613 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.613 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.871 { 00:19:53.871 "cntlid": 29, 00:19:53.871 "qid": 0, 00:19:53.871 "state": "enabled", 00:19:53.871 "listen_address": { 00:19:53.871 "trtype": "TCP", 00:19:53.871 "adrfam": "IPv4", 00:19:53.871 "traddr": "10.0.0.2", 00:19:53.871 "trsvcid": "4420" 00:19:53.871 }, 00:19:53.871 "peer_address": { 00:19:53.871 "trtype": "TCP", 00:19:53.871 "adrfam": "IPv4", 00:19:53.871 "traddr": "10.0.0.1", 00:19:53.871 "trsvcid": "47224" 00:19:53.871 }, 00:19:53.871 "auth": { 00:19:53.871 "state": "completed", 00:19:53.871 "digest": "sha256", 00:19:53.871 "dhgroup": "ffdhe4096" 00:19:53.871 } 00:19:53.871 } 00:19:53.871 ]' 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.871 
13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.871 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.129 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:19:55.065 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.065 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.065 13:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.065 13:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.065 13:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.066 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.066 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.066 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.324 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.891 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.891 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.150 { 00:19:56.150 "cntlid": 31, 00:19:56.150 "qid": 0, 00:19:56.150 "state": "enabled", 00:19:56.150 "listen_address": { 00:19:56.150 "trtype": "TCP", 00:19:56.150 "adrfam": "IPv4", 00:19:56.150 "traddr": "10.0.0.2", 00:19:56.150 "trsvcid": "4420" 00:19:56.150 }, 00:19:56.150 "peer_address": { 00:19:56.150 "trtype": "TCP", 00:19:56.150 "adrfam": "IPv4", 00:19:56.150 "traddr": "10.0.0.1", 00:19:56.150 "trsvcid": "47254" 00:19:56.150 }, 00:19:56.150 "auth": { 00:19:56.150 "state": "completed", 00:19:56.150 "digest": "sha256", 00:19:56.150 "dhgroup": "ffdhe4096" 00:19:56.150 } 00:19:56.150 } 00:19:56.150 ]' 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.150 13:55:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.150 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.410 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.348 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.607 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:19:58.174 00:19:58.174 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.174 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.174 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.431 { 00:19:58.431 "cntlid": 33, 00:19:58.431 "qid": 0, 00:19:58.431 "state": "enabled", 00:19:58.431 "listen_address": { 00:19:58.431 "trtype": "TCP", 00:19:58.431 "adrfam": "IPv4", 00:19:58.431 "traddr": "10.0.0.2", 00:19:58.431 "trsvcid": "4420" 00:19:58.431 }, 00:19:58.431 "peer_address": { 00:19:58.431 "trtype": "TCP", 00:19:58.431 "adrfam": "IPv4", 00:19:58.431 "traddr": "10.0.0.1", 00:19:58.431 "trsvcid": "47276" 00:19:58.431 }, 00:19:58.431 "auth": { 00:19:58.431 "state": "completed", 00:19:58.431 "digest": "sha256", 00:19:58.431 "dhgroup": "ffdhe6144" 00:19:58.431 } 00:19:58.431 } 00:19:58.431 ]' 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.431 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.690 13:55:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.690 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.690 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.690 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.690 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.949 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:59.883 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.141 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.708 00:20:00.708 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.708 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.708 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.966 { 00:20:00.966 "cntlid": 35, 00:20:00.966 "qid": 0, 00:20:00.966 "state": "enabled", 00:20:00.966 "listen_address": { 00:20:00.966 "trtype": "TCP", 00:20:00.966 "adrfam": "IPv4", 00:20:00.966 "traddr": "10.0.0.2", 00:20:00.966 "trsvcid": "4420" 00:20:00.966 }, 00:20:00.966 "peer_address": { 00:20:00.966 "trtype": "TCP", 00:20:00.966 "adrfam": "IPv4", 00:20:00.966 "traddr": "10.0.0.1", 00:20:00.966 "trsvcid": "47284" 00:20:00.966 }, 00:20:00.966 "auth": { 00:20:00.966 "state": "completed", 00:20:00.966 "digest": "sha256", 00:20:00.966 "dhgroup": "ffdhe6144" 00:20:00.966 } 00:20:00.966 } 00:20:00.966 ]' 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.966 13:55:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.966 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.226 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:20:02.163 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.163 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.420 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.984 00:20:02.984 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.984 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.984 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.243 { 00:20:03.243 "cntlid": 37, 00:20:03.243 "qid": 0, 00:20:03.243 "state": "enabled", 00:20:03.243 "listen_address": { 00:20:03.243 "trtype": "TCP", 00:20:03.243 "adrfam": "IPv4", 00:20:03.243 "traddr": "10.0.0.2", 00:20:03.243 "trsvcid": "4420" 00:20:03.243 }, 00:20:03.243 "peer_address": { 00:20:03.243 "trtype": "TCP", 00:20:03.243 "adrfam": "IPv4", 00:20:03.243 "traddr": "10.0.0.1", 00:20:03.243 "trsvcid": "47304" 00:20:03.243 }, 00:20:03.243 "auth": { 00:20:03.243 "state": "completed", 00:20:03.243 "digest": "sha256", 00:20:03.243 "dhgroup": "ffdhe6144" 00:20:03.243 } 00:20:03.243 } 00:20:03.243 ]' 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 
00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.243 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.502 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.878 13:55:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.878 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.879 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.446 00:20:05.446 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.446 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.446 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.704 { 00:20:05.704 "cntlid": 39, 00:20:05.704 "qid": 0, 00:20:05.704 "state": "enabled", 00:20:05.704 "listen_address": { 00:20:05.704 "trtype": "TCP", 00:20:05.704 "adrfam": "IPv4", 00:20:05.704 "traddr": "10.0.0.2", 00:20:05.704 "trsvcid": "4420" 00:20:05.704 }, 00:20:05.704 "peer_address": { 00:20:05.704 "trtype": "TCP", 00:20:05.704 "adrfam": "IPv4", 00:20:05.704 "traddr": "10.0.0.1", 00:20:05.704 "trsvcid": "40074" 00:20:05.704 }, 00:20:05.704 "auth": { 00:20:05.704 "state": "completed", 00:20:05.704 "digest": "sha256", 00:20:05.704 "dhgroup": "ffdhe6144" 00:20:05.704 } 00:20:05.704 } 00:20:05.704 ]' 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.704 13:55:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.704 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.962 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.898 
13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.898 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.156 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.096 00:20:08.096 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.096 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.096 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.354 { 00:20:08.354 "cntlid": 41, 00:20:08.354 "qid": 0, 00:20:08.354 "state": "enabled", 00:20:08.354 "listen_address": { 00:20:08.354 "trtype": "TCP", 00:20:08.354 "adrfam": "IPv4", 00:20:08.354 "traddr": "10.0.0.2", 00:20:08.354 "trsvcid": "4420" 00:20:08.354 }, 00:20:08.354 "peer_address": { 00:20:08.354 "trtype": "TCP", 00:20:08.354 "adrfam": "IPv4", 00:20:08.354 "traddr": "10.0.0.1", 00:20:08.354 "trsvcid": "40092" 00:20:08.354 }, 00:20:08.354 "auth": { 00:20:08.354 "state": "completed", 00:20:08.354 "digest": "sha256", 00:20:08.354 "dhgroup": "ffdhe8192" 00:20:08.354 } 00:20:08.354 } 00:20:08.354 ]' 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.354 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.613 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.613 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.614 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.614 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.583 13:55:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.583 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.841 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.779 00:20:10.779 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.779 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.779 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.037 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.037 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.037 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.037 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.037 13:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.037 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.037 { 00:20:11.037 "cntlid": 43, 00:20:11.037 "qid": 0, 00:20:11.037 "state": "enabled", 00:20:11.037 "listen_address": { 00:20:11.037 "trtype": "TCP", 00:20:11.037 "adrfam": "IPv4", 00:20:11.037 "traddr": "10.0.0.2", 00:20:11.037 "trsvcid": "4420" 00:20:11.037 }, 00:20:11.037 "peer_address": { 00:20:11.037 "trtype": "TCP", 00:20:11.037 "adrfam": "IPv4", 00:20:11.037 "traddr": "10.0.0.1", 00:20:11.037 "trsvcid": "40114" 00:20:11.037 }, 00:20:11.037 "auth": { 00:20:11.037 "state": "completed", 00:20:11.037 "digest": "sha256", 00:20:11.037 "dhgroup": "ffdhe8192" 00:20:11.037 } 00:20:11.037 } 00:20:11.037 ]' 00:20:11.037 13:55:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.294 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.554 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.491 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.749 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.749 13:55:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.685 00:20:13.685 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.685 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.685 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.942 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.943 { 00:20:13.943 "cntlid": 45, 00:20:13.943 "qid": 0, 00:20:13.943 "state": "enabled", 00:20:13.943 "listen_address": { 00:20:13.943 "trtype": "TCP", 00:20:13.943 "adrfam": "IPv4", 00:20:13.943 "traddr": "10.0.0.2", 00:20:13.943 "trsvcid": "4420" 00:20:13.943 }, 00:20:13.943 "peer_address": { 00:20:13.943 "trtype": "TCP", 00:20:13.943 "adrfam": "IPv4", 00:20:13.943 "traddr": "10.0.0.1", 00:20:13.943 "trsvcid": "40144" 00:20:13.943 }, 00:20:13.943 "auth": { 00:20:13.943 "state": "completed", 00:20:13.943 "digest": "sha256", 00:20:13.943 "dhgroup": "ffdhe8192" 00:20:13.943 } 00:20:13.943 } 00:20:13.943 ]' 
00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.943 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.200 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:20:15.132 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.132 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.132 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.132 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.132 13:55:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.132 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.132 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.132 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.389 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.389 13:55:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.322 00:20:16.322 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.322 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.322 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.580 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.580 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.580 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.580 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.581 { 00:20:16.581 "cntlid": 47, 00:20:16.581 "qid": 0, 00:20:16.581 "state": "enabled", 00:20:16.581 "listen_address": { 00:20:16.581 "trtype": "TCP", 00:20:16.581 "adrfam": "IPv4", 00:20:16.581 "traddr": "10.0.0.2", 00:20:16.581 "trsvcid": "4420" 00:20:16.581 }, 00:20:16.581 "peer_address": { 00:20:16.581 "trtype": "TCP", 00:20:16.581 "adrfam": "IPv4", 00:20:16.581 "traddr": "10.0.0.1", 00:20:16.581 "trsvcid": "34908" 00:20:16.581 }, 00:20:16.581 "auth": { 00:20:16.581 "state": "completed", 00:20:16.581 "digest": "sha256", 00:20:16.581 "dhgroup": "ffdhe8192" 00:20:16.581 } 00:20:16.581 } 00:20:16.581 ]' 00:20:16.581 13:55:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.581 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.839 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.213 
13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.213 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.213 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.472 00:20:18.472 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.472 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.472 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.730 { 00:20:18.730 "cntlid": 49, 00:20:18.730 "qid": 0, 00:20:18.730 "state": "enabled", 00:20:18.730 "listen_address": { 00:20:18.730 "trtype": "TCP", 00:20:18.730 "adrfam": "IPv4", 00:20:18.730 "traddr": "10.0.0.2", 00:20:18.730 "trsvcid": "4420" 00:20:18.730 }, 00:20:18.730 "peer_address": { 00:20:18.730 "trtype": "TCP", 00:20:18.730 "adrfam": "IPv4", 00:20:18.730 "traddr": "10.0.0.1", 00:20:18.730 "trsvcid": "34926" 00:20:18.730 }, 00:20:18.730 "auth": 
{ 00:20:18.730 "state": "completed", 00:20:18.730 "digest": "sha384", 00:20:18.730 "dhgroup": "null" 00:20:18.730 } 00:20:18.730 } 00:20:18.730 ]' 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:18.730 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.988 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.988 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.988 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.988 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.921 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.180 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.747 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.747 { 00:20:20.747 "cntlid": 51, 00:20:20.747 "qid": 0, 00:20:20.747 "state": "enabled", 00:20:20.747 "listen_address": { 00:20:20.747 "trtype": "TCP", 00:20:20.747 "adrfam": "IPv4", 00:20:20.747 "traddr": "10.0.0.2", 00:20:20.747 "trsvcid": "4420" 00:20:20.747 }, 00:20:20.747 "peer_address": { 00:20:20.747 "trtype": "TCP", 00:20:20.747 "adrfam": "IPv4", 00:20:20.747 "traddr": "10.0.0.1", 00:20:20.747 "trsvcid": "34952" 00:20:20.747 }, 
00:20:20.747 "auth": { 00:20:20.747 "state": "completed", 00:20:20.747 "digest": "sha384", 00:20:20.747 "dhgroup": "null" 00:20:20.747 } 00:20:20.747 } 00:20:20.747 ]' 00:20:20.747 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.004 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.005 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.005 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:21.005 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.005 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.005 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.005 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.262 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.198 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.456 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.714 00:20:22.714 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.714 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.714 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.972 { 00:20:22.972 "cntlid": 53, 00:20:22.972 "qid": 0, 00:20:22.972 "state": "enabled", 00:20:22.972 "listen_address": { 00:20:22.972 "trtype": "TCP", 00:20:22.972 "adrfam": "IPv4", 00:20:22.972 "traddr": "10.0.0.2", 00:20:22.972 "trsvcid": "4420" 00:20:22.972 }, 00:20:22.972 "peer_address": { 00:20:22.972 "trtype": "TCP", 00:20:22.972 "adrfam": "IPv4", 00:20:22.972 "traddr": "10.0.0.1", 00:20:22.972 "trsvcid": "34976" 00:20:22.972 }, 
00:20:22.972 "auth": { 00:20:22.972 "state": "completed", 00:20:22.972 "digest": "sha384", 00:20:22.972 "dhgroup": "null" 00:20:22.972 } 00:20:22.972 } 00:20:22.972 ]' 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.972 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.229 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.229 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.229 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.229 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.229 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.488 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:24.422 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.680 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.939 00:20:24.939 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.939 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.939 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.197 { 00:20:25.197 "cntlid": 55, 00:20:25.197 "qid": 0, 00:20:25.197 "state": "enabled", 00:20:25.197 "listen_address": { 00:20:25.197 "trtype": "TCP", 00:20:25.197 "adrfam": "IPv4", 00:20:25.197 "traddr": "10.0.0.2", 00:20:25.197 "trsvcid": "4420" 00:20:25.197 }, 00:20:25.197 "peer_address": { 00:20:25.197 "trtype": "TCP", 00:20:25.197 "adrfam": "IPv4", 00:20:25.197 "traddr": "10.0.0.1", 00:20:25.197 "trsvcid": "46364" 00:20:25.197 }, 00:20:25.197 "auth": { 00:20:25.197 "state": "completed", 00:20:25.197 
"digest": "sha384", 00:20:25.197 "dhgroup": "null" 00:20:25.197 } 00:20:25.197 } 00:20:25.197 ]' 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.197 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.455 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.455 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.455 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.712 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.645 
13:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.645 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:00:26.925 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:27.194
00:20:27.194 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:27.194 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:27.194 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:27.452 {
00:20:27.452 "cntlid": 57,
00:20:27.452 "qid": 0,
00:20:27.452 "state": "enabled",
00:20:27.452 "listen_address": {
00:20:27.452 "trtype": "TCP",
00:20:27.452 "adrfam": "IPv4",
00:20:27.452 "traddr": "10.0.0.2",
00:20:27.452 "trsvcid": "4420"
00:20:27.452 },
00:20:27.452 "peer_address": {
00:20:27.452 "trtype": "TCP",
00:20:27.452 "adrfam": "IPv4",
00:20:27.452 "traddr": "10.0.0.1",
00:20:27.452 "trsvcid": "46404"
00:20:27.452 },
00:20:27.452 "auth": {
00:20:27.452 "state": "completed",
00:20:27.452 "digest": "sha384",
00:20:27.452 "dhgroup": "ffdhe2048"
00:20:27.452 }
00:20:27.452 }
00:20:27.452 ]'
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:27.452 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:27.711 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=:
00:20:28.646 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:28.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:28.904 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:29.161 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:29.418
00:20:29.418 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:29.418 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:29.418 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:29.694 {
00:20:29.694 "cntlid": 59,
00:20:29.694 "qid": 0,
00:20:29.694 "state": "enabled",
00:20:29.694 "listen_address": {
00:20:29.694 "trtype": "TCP",
00:20:29.694 "adrfam": "IPv4",
00:20:29.694 "traddr": "10.0.0.2",
00:20:29.694 "trsvcid": "4420"
00:20:29.694 },
00:20:29.694 "peer_address": {
00:20:29.694 "trtype": "TCP",
00:20:29.694 "adrfam": "IPv4",
00:20:29.694 "traddr": "10.0.0.1",
00:20:29.694 "trsvcid": "46430"
00:20:29.694 },
00:20:29.694 "auth": {
00:20:29.694 "state": "completed",
00:20:29.694 "digest": "sha384",
00:20:29.694 "dhgroup": "ffdhe2048"
00:20:29.694 }
00:20:29.694 }
00:20:29.694 ]'
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:29.694 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:29.953 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:29.953 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:29.953 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:30.212 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==:
00:20:31.142 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:31.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:31.143 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:31.401 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:31.659
00:20:31.659 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:31.659 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:31.659 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:32.226 {
00:20:32.226 "cntlid": 61,
00:20:32.226 "qid": 0,
00:20:32.226 "state": "enabled",
00:20:32.226 "listen_address": {
00:20:32.226 "trtype": "TCP",
00:20:32.226 "adrfam": "IPv4",
00:20:32.226 "traddr": "10.0.0.2",
00:20:32.226 "trsvcid": "4420"
00:20:32.226 },
00:20:32.226 "peer_address": {
00:20:32.226 "trtype": "TCP",
00:20:32.226 "adrfam": "IPv4",
00:20:32.226 "traddr": "10.0.0.1",
00:20:32.226 "trsvcid": "46446"
00:20:32.226 },
00:20:32.226 "auth": {
00:20:32.226 "state": "completed",
00:20:32.226 "digest": "sha384",
00:20:32.226 "dhgroup": "ffdhe2048"
00:20:32.226 }
00:20:32.226 }
00:20:32.226 ]'
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:32.226 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:32.226 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:32.226 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:32.226 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:32.226 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:32.226 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:32.484 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g:
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:33.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:33.424 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:33.681 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:33.938
00:20:33.939 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:33.939 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:33.939 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:34.196 {
00:20:34.196 "cntlid": 63,
00:20:34.196 "qid": 0,
00:20:34.196 "state": "enabled",
00:20:34.196 "listen_address": {
00:20:34.196 "trtype": "TCP",
00:20:34.196 "adrfam": "IPv4",
00:20:34.196 "traddr": "10.0.0.2",
00:20:34.196 "trsvcid": "4420"
00:20:34.196 },
00:20:34.196 "peer_address": {
00:20:34.196 "trtype": "TCP",
00:20:34.196 "adrfam": "IPv4",
00:20:34.196 "traddr": "10.0.0.1",
00:20:34.196 "trsvcid": "49388"
00:20:34.196 },
00:20:34.196 "auth": {
00:20:34.196 "state": "completed",
00:20:34.196 "digest": "sha384",
00:20:34.196 "dhgroup": "ffdhe2048"
00:20:34.196 }
00:20:34.196 }
00:20:34.196 ]'
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:34.196 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:34.454 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:34.454 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:34.454 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:34.454 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:34.454 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:34.712 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=:
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:35.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:35.649 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:35.906 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:36.164
00:20:36.164 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:36.164 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:36.164 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:36.423 {
00:20:36.423 "cntlid": 65,
00:20:36.423 "qid": 0,
00:20:36.423 "state": "enabled",
00:20:36.423 "listen_address": {
00:20:36.423 "trtype": "TCP",
00:20:36.423 "adrfam": "IPv4",
00:20:36.423 "traddr": "10.0.0.2",
00:20:36.423 "trsvcid": "4420"
00:20:36.423 },
00:20:36.423 "peer_address": {
00:20:36.423 "trtype": "TCP",
00:20:36.423 "adrfam": "IPv4",
00:20:36.423 "traddr": "10.0.0.1",
00:20:36.423 "trsvcid": "49408"
00:20:36.423 },
00:20:36.423 "auth": {
00:20:36.423 "state": "completed",
00:20:36.423 "digest": "sha384",
00:20:36.423 "dhgroup": "ffdhe3072"
00:20:36.423 }
00:20:36.423 }
00:20:36.423 ]'
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:36.423 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:36.681 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:36.681 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:36.681 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:36.681 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:36.681 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:36.938 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=:
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:37.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:37.875 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.132 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:38.389
00:20:38.389 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:38.389 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:38.389 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:38.647 {
00:20:38.647 "cntlid": 67,
00:20:38.647 "qid": 0,
00:20:38.647 "state": "enabled",
00:20:38.647 "listen_address": {
00:20:38.647 "trtype": "TCP",
00:20:38.647 "adrfam": "IPv4",
00:20:38.647 "traddr": "10.0.0.2",
00:20:38.647 "trsvcid": "4420"
00:20:38.647 },
00:20:38.647 "peer_address": {
00:20:38.647 "trtype": "TCP",
00:20:38.647 "adrfam": "IPv4",
00:20:38.647 "traddr": "10.0.0.1",
00:20:38.647 "trsvcid": "49440"
00:20:38.647 },
00:20:38.647 "auth": {
00:20:38.647 "state": "completed",
00:20:38.647 "digest": "sha384",
00:20:38.647 "dhgroup": "ffdhe3072"
00:20:38.647 }
00:20:38.647 }
00:20:38.647 ]'
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:38.647 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:38.905 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:38.905 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:38.905 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:38.905 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:38.905 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:38.905 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:39.163 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==:
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:40.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:40.110 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:40.368 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:40.625
00:20:40.625 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:40.625 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:40.626 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:40.883 {
00:20:40.883 "cntlid": 69,
00:20:40.883 "qid": 0,
00:20:40.883 "state": "enabled",
00:20:40.883 "listen_address": {
00:20:40.883 "trtype": "TCP",
00:20:40.883 "adrfam": "IPv4",
00:20:40.883 "traddr": "10.0.0.2",
00:20:40.883 "trsvcid": "4420"
00:20:40.883 },
00:20:40.883 "peer_address": {
00:20:40.883 "trtype": "TCP",
00:20:40.883 "adrfam": "IPv4",
00:20:40.883 "traddr": "10.0.0.1",
00:20:40.883 "trsvcid": "49456"
00:20:40.883 },
00:20:40.883 "auth": {
00:20:40.883 "state": "completed",
00:20:40.883 "digest": "sha384",
00:20:40.883 "dhgroup": "ffdhe3072"
00:20:40.883 }
00:20:40.883 }
00:20:40.883 ]'
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:40.883 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:41.140 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:41.140 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:41.140 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:41.140 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:41.140 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:41.399 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g:
00:20:42.335 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:42.335 NQN:nqn.2024-03.io.spdk:cnode0
disconnected 1 controller(s) 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.336 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.594 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.852 00:20:42.852 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.852 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.852 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.110 { 00:20:43.110 "cntlid": 71, 00:20:43.110 "qid": 0, 00:20:43.110 "state": "enabled", 00:20:43.110 "listen_address": { 00:20:43.110 "trtype": "TCP", 00:20:43.110 "adrfam": "IPv4", 00:20:43.110 "traddr": 
"10.0.0.2", 00:20:43.110 "trsvcid": "4420" 00:20:43.110 }, 00:20:43.110 "peer_address": { 00:20:43.110 "trtype": "TCP", 00:20:43.110 "adrfam": "IPv4", 00:20:43.110 "traddr": "10.0.0.1", 00:20:43.110 "trsvcid": "49482" 00:20:43.110 }, 00:20:43.110 "auth": { 00:20:43.110 "state": "completed", 00:20:43.110 "digest": "sha384", 00:20:43.110 "dhgroup": "ffdhe3072" 00:20:43.110 } 00:20:43.110 } 00:20:43.110 ]' 00:20:43.110 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.367 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.624 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.618 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.876 13:56:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.876 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.134 00:20:45.134 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.134 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.134 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.391 { 00:20:45.391 "cntlid": 73, 00:20:45.391 "qid": 0, 00:20:45.391 "state": "enabled", 00:20:45.391 "listen_address": { 00:20:45.391 
"trtype": "TCP", 00:20:45.391 "adrfam": "IPv4", 00:20:45.391 "traddr": "10.0.0.2", 00:20:45.391 "trsvcid": "4420" 00:20:45.391 }, 00:20:45.391 "peer_address": { 00:20:45.391 "trtype": "TCP", 00:20:45.391 "adrfam": "IPv4", 00:20:45.391 "traddr": "10.0.0.1", 00:20:45.391 "trsvcid": "40846" 00:20:45.391 }, 00:20:45.391 "auth": { 00:20:45.391 "state": "completed", 00:20:45.391 "digest": "sha384", 00:20:45.391 "dhgroup": "ffdhe4096" 00:20:45.391 } 00:20:45.391 } 00:20:45.391 ]' 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.391 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.649 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.649 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.649 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.907 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:46.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.845 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.103 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.360 00:20:47.361 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.361 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.361 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.618 { 00:20:47.618 "cntlid": 75, 00:20:47.618 "qid": 0, 
00:20:47.618 "state": "enabled", 00:20:47.618 "listen_address": { 00:20:47.618 "trtype": "TCP", 00:20:47.618 "adrfam": "IPv4", 00:20:47.618 "traddr": "10.0.0.2", 00:20:47.618 "trsvcid": "4420" 00:20:47.618 }, 00:20:47.618 "peer_address": { 00:20:47.618 "trtype": "TCP", 00:20:47.618 "adrfam": "IPv4", 00:20:47.618 "traddr": "10.0.0.1", 00:20:47.618 "trsvcid": "40864" 00:20:47.618 }, 00:20:47.618 "auth": { 00:20:47.618 "state": "completed", 00:20:47.618 "digest": "sha384", 00:20:47.618 "dhgroup": "ffdhe4096" 00:20:47.618 } 00:20:47.618 } 00:20:47.618 ]' 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.618 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.898 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.898 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.898 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.898 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:20:49.273 13:56:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.274 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.274 
13:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.274 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.531 00:20:49.789 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.789 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.789 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.047 { 00:20:50.047 
"cntlid": 77, 00:20:50.047 "qid": 0, 00:20:50.047 "state": "enabled", 00:20:50.047 "listen_address": { 00:20:50.047 "trtype": "TCP", 00:20:50.047 "adrfam": "IPv4", 00:20:50.047 "traddr": "10.0.0.2", 00:20:50.047 "trsvcid": "4420" 00:20:50.047 }, 00:20:50.047 "peer_address": { 00:20:50.047 "trtype": "TCP", 00:20:50.047 "adrfam": "IPv4", 00:20:50.047 "traddr": "10.0.0.1", 00:20:50.047 "trsvcid": "40898" 00:20:50.047 }, 00:20:50.047 "auth": { 00:20:50.047 "state": "completed", 00:20:50.047 "digest": "sha384", 00:20:50.047 "dhgroup": "ffdhe4096" 00:20:50.047 } 00:20:50.047 } 00:20:50.047 ]' 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.047 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.304 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:20:51.240 13:56:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.240 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 
00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.498 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.067 00:20:52.067 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.067 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.067 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.325 { 00:20:52.325 "cntlid": 79, 00:20:52.325 "qid": 0, 
00:20:52.325 "state": "enabled", 00:20:52.325 "listen_address": { 00:20:52.325 "trtype": "TCP", 00:20:52.325 "adrfam": "IPv4", 00:20:52.325 "traddr": "10.0.0.2", 00:20:52.325 "trsvcid": "4420" 00:20:52.325 }, 00:20:52.325 "peer_address": { 00:20:52.325 "trtype": "TCP", 00:20:52.325 "adrfam": "IPv4", 00:20:52.325 "traddr": "10.0.0.1", 00:20:52.325 "trsvcid": "40912" 00:20:52.325 }, 00:20:52.325 "auth": { 00:20:52.325 "state": "completed", 00:20:52.325 "digest": "sha384", 00:20:52.325 "dhgroup": "ffdhe4096" 00:20:52.325 } 00:20:52.325 } 00:20:52.325 ]' 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.325 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.583 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:53.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.520 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.778 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.779 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.346 00:20:54.346 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.346 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.346 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.603 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:54.604 { 00:20:54.604 "cntlid": 81, 00:20:54.604 "qid": 0, 00:20:54.604 "state": "enabled", 00:20:54.604 "listen_address": { 00:20:54.604 "trtype": "TCP", 00:20:54.604 "adrfam": "IPv4", 00:20:54.604 "traddr": "10.0.0.2", 00:20:54.604 "trsvcid": "4420" 00:20:54.604 }, 00:20:54.604 "peer_address": { 00:20:54.604 "trtype": "TCP", 00:20:54.604 "adrfam": "IPv4", 00:20:54.604 "traddr": "10.0.0.1", 00:20:54.604 "trsvcid": "42652" 00:20:54.604 }, 00:20:54.604 "auth": { 00:20:54.604 "state": "completed", 00:20:54.604 "digest": "sha384", 00:20:54.604 "dhgroup": "ffdhe6144" 00:20:54.604 } 00:20:54.604 } 00:20:54.604 ]' 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.604 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.861 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.861 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.861 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.861 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.862 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.119 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret 
DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.050 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.309 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.875 00:20:56.875 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.875 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.875 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.132 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.132 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.132 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.132 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.132 13:56:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.132 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.132 { 00:20:57.132 "cntlid": 83, 00:20:57.132 "qid": 0, 00:20:57.132 "state": "enabled", 00:20:57.132 "listen_address": { 00:20:57.132 "trtype": "TCP", 00:20:57.132 "adrfam": "IPv4", 00:20:57.132 "traddr": "10.0.0.2", 00:20:57.132 "trsvcid": "4420" 00:20:57.132 }, 00:20:57.132 "peer_address": { 00:20:57.132 "trtype": "TCP", 00:20:57.132 "adrfam": "IPv4", 00:20:57.132 "traddr": "10.0.0.1", 00:20:57.132 "trsvcid": "42686" 00:20:57.132 }, 00:20:57.132 "auth": { 00:20:57.132 "state": "completed", 00:20:57.132 "digest": "sha384", 00:20:57.132 "dhgroup": "ffdhe6144" 00:20:57.132 } 00:20:57.132 } 00:20:57.132 ]' 00:20:57.132 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.132 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.132 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.132 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.132 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.391 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.391 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.391 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.650 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:20:58.583 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.583 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.584 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.584 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.584 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.584 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.584 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.584 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.841 13:56:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.841 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.842 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.842 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.407 00:20:59.407 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.407 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.407 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.664 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.664 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.664 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.664 13:56:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.665 { 00:20:59.665 "cntlid": 85, 00:20:59.665 "qid": 0, 00:20:59.665 "state": "enabled", 00:20:59.665 "listen_address": { 00:20:59.665 "trtype": "TCP", 00:20:59.665 "adrfam": "IPv4", 00:20:59.665 "traddr": "10.0.0.2", 00:20:59.665 "trsvcid": "4420" 00:20:59.665 }, 00:20:59.665 "peer_address": { 00:20:59.665 "trtype": "TCP", 00:20:59.665 "adrfam": "IPv4", 00:20:59.665 "traddr": "10.0.0.1", 00:20:59.665 "trsvcid": "42710" 00:20:59.665 }, 00:20:59.665 "auth": { 00:20:59.665 "state": "completed", 00:20:59.665 "digest": "sha384", 00:20:59.665 "dhgroup": "ffdhe6144" 00:20:59.665 } 00:20:59.665 } 00:20:59.665 ]' 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.665 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.927 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.927 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.927 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.184 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.116 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.373 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:01.373 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.373 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.373 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.373 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.373 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.374 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.374 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.374 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.374 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.374 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.374 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.938 00:21:01.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.938 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.221 { 00:21:02.221 "cntlid": 87, 00:21:02.221 "qid": 0, 00:21:02.221 "state": "enabled", 00:21:02.221 "listen_address": { 00:21:02.221 "trtype": "TCP", 00:21:02.221 "adrfam": "IPv4", 00:21:02.221 "traddr": "10.0.0.2", 00:21:02.221 "trsvcid": "4420" 00:21:02.221 }, 00:21:02.221 "peer_address": { 00:21:02.221 "trtype": "TCP", 00:21:02.221 "adrfam": "IPv4", 00:21:02.221 "traddr": "10.0.0.1", 00:21:02.221 "trsvcid": "42742" 00:21:02.221 }, 00:21:02.221 "auth": { 00:21:02.221 "state": "completed", 00:21:02.221 "digest": "sha384", 00:21:02.221 "dhgroup": "ffdhe6144" 00:21:02.221 } 00:21:02.221 } 00:21:02.221 ]' 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.221 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.221 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.221 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.221 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.221 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.221 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.488 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.421 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.678 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.609 00:21:04.609 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.609 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.609 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.866 { 00:21:04.866 "cntlid": 89, 00:21:04.866 "qid": 0, 00:21:04.866 "state": "enabled", 00:21:04.866 "listen_address": { 00:21:04.866 "trtype": "TCP", 00:21:04.866 "adrfam": "IPv4", 00:21:04.866 "traddr": "10.0.0.2", 00:21:04.866 "trsvcid": "4420" 00:21:04.866 }, 00:21:04.866 "peer_address": { 00:21:04.866 "trtype": "TCP", 00:21:04.866 "adrfam": "IPv4", 00:21:04.866 "traddr": "10.0.0.1", 00:21:04.866 "trsvcid": "49976" 00:21:04.866 }, 00:21:04.866 "auth": { 00:21:04.866 "state": "completed", 00:21:04.866 "digest": "sha384", 00:21:04.866 "dhgroup": "ffdhe8192" 00:21:04.866 } 00:21:04.866 } 00:21:04.866 ]' 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.866 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.137 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:21:06.067 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.067 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:06.324 13:56:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.324 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.260 00:21:07.260 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.260 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.260 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.516 { 00:21:07.516 "cntlid": 91, 00:21:07.516 "qid": 0, 00:21:07.516 "state": "enabled", 00:21:07.516 "listen_address": { 00:21:07.516 "trtype": "TCP", 00:21:07.516 "adrfam": "IPv4", 00:21:07.516 "traddr": "10.0.0.2", 00:21:07.516 "trsvcid": "4420" 00:21:07.516 }, 00:21:07.516 "peer_address": { 00:21:07.516 "trtype": "TCP", 00:21:07.516 "adrfam": "IPv4", 00:21:07.516 "traddr": "10.0.0.1", 00:21:07.516 "trsvcid": "50000" 00:21:07.516 }, 00:21:07.516 "auth": { 00:21:07.516 "state": "completed", 00:21:07.516 "digest": "sha384", 00:21:07.516 "dhgroup": "ffdhe8192" 00:21:07.516 } 00:21:07.516 } 00:21:07.516 ]' 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.516 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.772 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.772 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.772 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.772 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.772 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.029 
13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.960 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.218 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.149 00:21:10.149 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.149 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.149 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.407 { 00:21:10.407 "cntlid": 93, 00:21:10.407 "qid": 0, 00:21:10.407 "state": "enabled", 00:21:10.407 "listen_address": { 00:21:10.407 "trtype": "TCP", 00:21:10.407 "adrfam": "IPv4", 00:21:10.407 "traddr": "10.0.0.2", 00:21:10.407 "trsvcid": "4420" 00:21:10.407 }, 00:21:10.407 "peer_address": { 00:21:10.407 "trtype": "TCP", 00:21:10.407 "adrfam": "IPv4", 00:21:10.407 "traddr": "10.0.0.1", 00:21:10.407 "trsvcid": "50020" 00:21:10.407 }, 00:21:10.407 "auth": { 00:21:10.407 "state": "completed", 00:21:10.407 "digest": "sha384", 00:21:10.407 "dhgroup": "ffdhe8192" 00:21:10.407 } 00:21:10.407 } 00:21:10.407 ]' 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.407 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:10.664 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:11.595 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.595 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.595 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.595 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.851 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.851 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.851 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.851 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.108 13:56:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.108 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.040 00:21:13.040 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.040 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.040 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.040 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.297 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.298 { 00:21:13.298 "cntlid": 95, 00:21:13.298 "qid": 0, 00:21:13.298 "state": "enabled", 00:21:13.298 "listen_address": { 00:21:13.298 "trtype": "TCP", 00:21:13.298 "adrfam": "IPv4", 00:21:13.298 "traddr": "10.0.0.2", 00:21:13.298 "trsvcid": "4420" 00:21:13.298 }, 00:21:13.298 "peer_address": { 00:21:13.298 "trtype": "TCP", 00:21:13.298 "adrfam": "IPv4", 00:21:13.298 "traddr": "10.0.0.1", 00:21:13.298 "trsvcid": "50052" 00:21:13.298 }, 00:21:13.298 "auth": { 00:21:13.298 "state": "completed", 00:21:13.298 "digest": "sha384", 00:21:13.298 "dhgroup": "ffdhe8192" 00:21:13.298 } 00:21:13.298 } 00:21:13.298 ]' 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.298 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.556 
13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.489 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.746 13:56:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.746 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.747 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.747 13:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.747 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.747 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.004 00:21:15.004 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.004 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.004 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.261 13:56:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.262 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.262 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.262 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.262 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.262 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.262 { 00:21:15.262 "cntlid": 97, 00:21:15.262 "qid": 0, 00:21:15.262 "state": "enabled", 00:21:15.262 "listen_address": { 00:21:15.262 "trtype": "TCP", 00:21:15.262 "adrfam": "IPv4", 00:21:15.262 "traddr": "10.0.0.2", 00:21:15.262 "trsvcid": "4420" 00:21:15.262 }, 00:21:15.262 "peer_address": { 00:21:15.262 "trtype": "TCP", 00:21:15.262 "adrfam": "IPv4", 00:21:15.262 "traddr": "10.0.0.1", 00:21:15.262 "trsvcid": "54320" 00:21:15.262 }, 00:21:15.262 "auth": { 00:21:15.262 "state": "completed", 00:21:15.262 "digest": "sha512", 00:21:15.262 "dhgroup": "null" 00:21:15.262 } 00:21:15.262 } 00:21:15.262 ]' 00:21:15.262 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.519 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.776 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.707 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.965 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.229 00:21:17.229 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.229 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.229 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.487 { 00:21:17.487 "cntlid": 99, 00:21:17.487 "qid": 0, 00:21:17.487 "state": "enabled", 00:21:17.487 "listen_address": { 00:21:17.487 "trtype": "TCP", 00:21:17.487 "adrfam": "IPv4", 00:21:17.487 "traddr": "10.0.0.2", 00:21:17.487 "trsvcid": "4420" 00:21:17.487 }, 00:21:17.487 "peer_address": { 00:21:17.487 "trtype": "TCP", 00:21:17.487 "adrfam": "IPv4", 00:21:17.487 "traddr": "10.0.0.1", 00:21:17.487 "trsvcid": "54356" 00:21:17.487 }, 00:21:17.487 "auth": { 00:21:17.487 "state": "completed", 00:21:17.487 "digest": "sha512", 00:21:17.487 "dhgroup": "null" 00:21:17.487 } 00:21:17.487 } 00:21:17.487 ]' 00:21:17.487 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.745 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.745 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.745 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:17.745 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.745 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.745 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.745 
13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.003 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.937 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.195 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.453 00:21:19.453 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.453 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.453 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.711 { 00:21:19.711 "cntlid": 101, 00:21:19.711 "qid": 0, 00:21:19.711 "state": "enabled", 00:21:19.711 "listen_address": { 00:21:19.711 "trtype": "TCP", 00:21:19.711 "adrfam": "IPv4", 00:21:19.711 "traddr": "10.0.0.2", 00:21:19.711 "trsvcid": "4420" 00:21:19.711 }, 00:21:19.711 "peer_address": { 00:21:19.711 "trtype": "TCP", 00:21:19.711 "adrfam": "IPv4", 00:21:19.711 "traddr": "10.0.0.1", 00:21:19.711 "trsvcid": "54390" 00:21:19.711 }, 00:21:19.711 "auth": { 00:21:19.711 "state": "completed", 00:21:19.711 "digest": "sha512", 00:21:19.711 "dhgroup": "null" 00:21:19.711 } 00:21:19.711 } 00:21:19.711 ]' 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.711 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.975 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:19.975 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.975 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.975 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.975 
13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.271 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:21.203 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.203 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.461 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.719 00:21:21.719 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.719 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.719 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.977 { 00:21:21.977 "cntlid": 103, 00:21:21.977 "qid": 0, 00:21:21.977 "state": "enabled", 00:21:21.977 "listen_address": { 00:21:21.977 "trtype": "TCP", 00:21:21.977 "adrfam": "IPv4", 00:21:21.977 "traddr": "10.0.0.2", 00:21:21.977 "trsvcid": "4420" 00:21:21.977 }, 00:21:21.977 "peer_address": { 00:21:21.977 "trtype": "TCP", 00:21:21.977 "adrfam": "IPv4", 00:21:21.977 "traddr": "10.0.0.1", 00:21:21.977 "trsvcid": "54424" 00:21:21.977 }, 00:21:21.977 "auth": { 00:21:21.977 "state": "completed", 00:21:21.977 "digest": "sha512", 00:21:21.977 "dhgroup": "null" 00:21:21.977 } 00:21:21.977 } 00:21:21.977 ]' 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:21.977 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.234 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.234 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.234 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.492 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.423 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.681 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:23.681 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:23.681 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.681 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:23.681 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.681 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.682 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.682 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.682 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.682 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.682 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.682 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.939 00:21:23.939 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.939 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.939 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.197 { 00:21:24.197 "cntlid": 105, 00:21:24.197 "qid": 0, 00:21:24.197 "state": "enabled", 00:21:24.197 "listen_address": { 00:21:24.197 "trtype": "TCP", 00:21:24.197 "adrfam": "IPv4", 00:21:24.197 "traddr": "10.0.0.2", 00:21:24.197 "trsvcid": "4420" 00:21:24.197 }, 00:21:24.197 "peer_address": { 00:21:24.197 "trtype": "TCP", 00:21:24.197 "adrfam": "IPv4", 00:21:24.197 "traddr": "10.0.0.1", 00:21:24.197 "trsvcid": "42768" 00:21:24.197 }, 00:21:24.197 "auth": { 00:21:24.197 "state": "completed", 00:21:24.197 "digest": "sha512", 00:21:24.197 "dhgroup": "ffdhe2048" 00:21:24.197 } 00:21:24.197 } 00:21:24.197 ]' 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.197 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.454 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.454 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:24.454 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.711 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.640 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:25.898 13:57:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.898 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.154 00:21:26.154 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.154 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.154 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.410 { 00:21:26.410 "cntlid": 107, 00:21:26.410 "qid": 0, 00:21:26.410 "state": "enabled", 00:21:26.410 "listen_address": { 00:21:26.410 "trtype": "TCP", 00:21:26.410 "adrfam": "IPv4", 00:21:26.410 "traddr": "10.0.0.2", 00:21:26.410 "trsvcid": "4420" 00:21:26.410 }, 00:21:26.410 "peer_address": { 00:21:26.410 "trtype": "TCP", 00:21:26.410 "adrfam": "IPv4", 00:21:26.410 "traddr": "10.0.0.1", 00:21:26.410 "trsvcid": "42780" 00:21:26.410 }, 00:21:26.410 "auth": { 00:21:26.410 "state": "completed", 00:21:26.410 "digest": "sha512", 00:21:26.410 "dhgroup": "ffdhe2048" 00:21:26.410 } 00:21:26.410 } 00:21:26.410 ]' 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.410 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.667 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.667 13:57:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.667 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.924 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:21:27.856 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.856 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.856 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.856 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.857 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.857 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.857 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.857 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.115 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.372 00:21:28.372 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.372 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.372 13:57:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.631 { 00:21:28.631 "cntlid": 109, 00:21:28.631 "qid": 0, 00:21:28.631 "state": "enabled", 00:21:28.631 "listen_address": { 00:21:28.631 "trtype": "TCP", 00:21:28.631 "adrfam": "IPv4", 00:21:28.631 "traddr": "10.0.0.2", 00:21:28.631 "trsvcid": "4420" 00:21:28.631 }, 00:21:28.631 "peer_address": { 00:21:28.631 "trtype": "TCP", 00:21:28.631 "adrfam": "IPv4", 00:21:28.631 "traddr": "10.0.0.1", 00:21:28.631 "trsvcid": "42802" 00:21:28.631 }, 00:21:28.631 "auth": { 00:21:28.631 "state": "completed", 00:21:28.631 "digest": "sha512", 00:21:28.631 "dhgroup": "ffdhe2048" 00:21:28.631 } 00:21:28.631 } 00:21:28.631 ]' 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.631 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.889 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.820 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.080 13:57:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.080 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.645 00:21:30.645 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.645 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.645 13:57:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.904 { 00:21:30.904 "cntlid": 111, 00:21:30.904 "qid": 0, 00:21:30.904 "state": "enabled", 00:21:30.904 "listen_address": { 00:21:30.904 "trtype": "TCP", 00:21:30.904 "adrfam": "IPv4", 00:21:30.904 "traddr": "10.0.0.2", 00:21:30.904 "trsvcid": "4420" 00:21:30.904 }, 00:21:30.904 "peer_address": { 00:21:30.904 "trtype": "TCP", 00:21:30.904 "adrfam": "IPv4", 00:21:30.904 "traddr": "10.0.0.1", 00:21:30.904 "trsvcid": "42832" 00:21:30.904 }, 00:21:30.904 "auth": { 00:21:30.904 "state": "completed", 00:21:30.904 "digest": "sha512", 00:21:30.904 "dhgroup": "ffdhe2048" 00:21:30.904 } 00:21:30.904 } 00:21:30.904 ]' 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.904 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.161 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:32.092 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.093 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.349 13:57:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.349 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.607 00:21:32.607 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.607 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:21:32.607 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.864 { 00:21:32.864 "cntlid": 113, 00:21:32.864 "qid": 0, 00:21:32.864 "state": "enabled", 00:21:32.864 "listen_address": { 00:21:32.864 "trtype": "TCP", 00:21:32.864 "adrfam": "IPv4", 00:21:32.864 "traddr": "10.0.0.2", 00:21:32.864 "trsvcid": "4420" 00:21:32.864 }, 00:21:32.864 "peer_address": { 00:21:32.864 "trtype": "TCP", 00:21:32.864 "adrfam": "IPv4", 00:21:32.864 "traddr": "10.0.0.1", 00:21:32.864 "trsvcid": "42852" 00:21:32.864 }, 00:21:32.864 "auth": { 00:21:32.864 "state": "completed", 00:21:32.864 "digest": "sha512", 00:21:32.864 "dhgroup": "ffdhe3072" 00:21:32.864 } 00:21:32.864 } 00:21:32.864 ]' 00:21:32.864 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.121 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.380 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.314 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.572 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.829 00:21:34.829 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:21:34.829 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.829 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.088 { 00:21:35.088 "cntlid": 115, 00:21:35.088 "qid": 0, 00:21:35.088 "state": "enabled", 00:21:35.088 "listen_address": { 00:21:35.088 "trtype": "TCP", 00:21:35.088 "adrfam": "IPv4", 00:21:35.088 "traddr": "10.0.0.2", 00:21:35.088 "trsvcid": "4420" 00:21:35.088 }, 00:21:35.088 "peer_address": { 00:21:35.088 "trtype": "TCP", 00:21:35.088 "adrfam": "IPv4", 00:21:35.088 "traddr": "10.0.0.1", 00:21:35.088 "trsvcid": "57312" 00:21:35.088 }, 00:21:35.088 "auth": { 00:21:35.088 "state": "completed", 00:21:35.088 "digest": "sha512", 00:21:35.088 "dhgroup": "ffdhe3072" 00:21:35.088 } 00:21:35.088 } 00:21:35.088 ]' 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.088 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.347 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.347 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:21:35.347 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.347 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.347 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.604 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.539 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.819 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:36.819 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.820 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.091 00:21:37.091 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:21:37.091 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.091 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.349 { 00:21:37.349 "cntlid": 117, 00:21:37.349 "qid": 0, 00:21:37.349 "state": "enabled", 00:21:37.349 "listen_address": { 00:21:37.349 "trtype": "TCP", 00:21:37.349 "adrfam": "IPv4", 00:21:37.349 "traddr": "10.0.0.2", 00:21:37.349 "trsvcid": "4420" 00:21:37.349 }, 00:21:37.349 "peer_address": { 00:21:37.349 "trtype": "TCP", 00:21:37.349 "adrfam": "IPv4", 00:21:37.349 "traddr": "10.0.0.1", 00:21:37.349 "trsvcid": "57338" 00:21:37.349 }, 00:21:37.349 "auth": { 00:21:37.349 "state": "completed", 00:21:37.349 "digest": "sha512", 00:21:37.349 "dhgroup": "ffdhe3072" 00:21:37.349 } 00:21:37.349 } 00:21:37.349 ]' 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.349 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.607 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.544 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.802 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.060 13:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.060 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.060 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.318 00:21:39.318 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.318 13:57:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.318 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.576 { 00:21:39.576 "cntlid": 119, 00:21:39.576 "qid": 0, 00:21:39.576 "state": "enabled", 00:21:39.576 "listen_address": { 00:21:39.576 "trtype": "TCP", 00:21:39.576 "adrfam": "IPv4", 00:21:39.576 "traddr": "10.0.0.2", 00:21:39.576 "trsvcid": "4420" 00:21:39.576 }, 00:21:39.576 "peer_address": { 00:21:39.576 "trtype": "TCP", 00:21:39.576 "adrfam": "IPv4", 00:21:39.576 "traddr": "10.0.0.1", 00:21:39.576 "trsvcid": "57364" 00:21:39.576 }, 00:21:39.576 "auth": { 00:21:39.576 "state": "completed", 00:21:39.576 "digest": "sha512", 00:21:39.576 "dhgroup": "ffdhe3072" 00:21:39.576 } 00:21:39.576 } 00:21:39.576 ]' 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.576 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.834 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.770 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.028 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.597 
00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.597 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.855 { 00:21:41.855 "cntlid": 121, 00:21:41.855 "qid": 0, 00:21:41.855 "state": "enabled", 00:21:41.855 "listen_address": { 00:21:41.855 "trtype": "TCP", 00:21:41.855 "adrfam": "IPv4", 00:21:41.855 "traddr": "10.0.0.2", 00:21:41.855 "trsvcid": "4420" 00:21:41.855 }, 00:21:41.855 "peer_address": { 00:21:41.855 "trtype": "TCP", 00:21:41.855 "adrfam": "IPv4", 00:21:41.855 "traddr": "10.0.0.1", 00:21:41.855 "trsvcid": "57392" 00:21:41.855 }, 00:21:41.855 "auth": { 00:21:41.855 "state": "completed", 00:21:41.855 "digest": "sha512", 00:21:41.855 "dhgroup": "ffdhe4096" 00:21:41.855 } 00:21:41.855 } 00:21:41.855 ]' 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.855 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.113 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.047 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.047 
13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.306 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.875 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.875 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.134 { 00:21:44.134 "cntlid": 123, 00:21:44.134 "qid": 0, 00:21:44.134 "state": "enabled", 00:21:44.134 "listen_address": { 00:21:44.134 "trtype": "TCP", 00:21:44.134 "adrfam": "IPv4", 00:21:44.134 "traddr": "10.0.0.2", 00:21:44.134 "trsvcid": "4420" 00:21:44.134 }, 00:21:44.134 "peer_address": { 00:21:44.134 "trtype": "TCP", 00:21:44.134 "adrfam": "IPv4", 00:21:44.134 "traddr": "10.0.0.1", 00:21:44.134 "trsvcid": "35860" 00:21:44.134 }, 00:21:44.134 "auth": { 00:21:44.134 "state": "completed", 00:21:44.134 "digest": "sha512", 00:21:44.134 "dhgroup": "ffdhe4096" 00:21:44.134 } 00:21:44.134 } 00:21:44.134 ]' 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.134 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.393 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.330 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.587 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.151 00:21:46.151 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.151 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.151 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.151 { 00:21:46.151 "cntlid": 125, 00:21:46.151 "qid": 0, 00:21:46.151 "state": "enabled", 00:21:46.151 "listen_address": { 00:21:46.151 "trtype": "TCP", 00:21:46.151 "adrfam": "IPv4", 00:21:46.151 "traddr": "10.0.0.2", 00:21:46.151 "trsvcid": "4420" 00:21:46.151 }, 00:21:46.151 "peer_address": { 00:21:46.151 "trtype": "TCP", 00:21:46.151 "adrfam": "IPv4", 00:21:46.151 "traddr": "10.0.0.1", 00:21:46.151 "trsvcid": "35896" 00:21:46.151 }, 00:21:46.151 "auth": { 00:21:46.151 "state": "completed", 00:21:46.151 "digest": "sha512", 00:21:46.151 "dhgroup": "ffdhe4096" 00:21:46.151 } 00:21:46.151 } 00:21:46.151 ]' 00:21:46.151 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.409 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.409 13:57:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.409 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.409 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.409 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.409 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.409 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.667 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.599 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.855 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.112 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.370 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.627 { 00:21:48.627 "cntlid": 127, 00:21:48.627 "qid": 0, 00:21:48.627 "state": "enabled", 00:21:48.627 "listen_address": { 00:21:48.627 "trtype": "TCP", 00:21:48.627 "adrfam": "IPv4", 00:21:48.627 "traddr": "10.0.0.2", 00:21:48.627 "trsvcid": "4420" 00:21:48.627 }, 00:21:48.627 "peer_address": { 00:21:48.627 "trtype": "TCP", 00:21:48.627 "adrfam": "IPv4", 00:21:48.627 "traddr": "10.0.0.1", 00:21:48.627 "trsvcid": "35924" 00:21:48.627 }, 00:21:48.627 "auth": { 00:21:48.627 "state": "completed", 00:21:48.627 "digest": "sha512", 00:21:48.627 "dhgroup": "ffdhe4096" 00:21:48.627 } 00:21:48.627 } 00:21:48.627 ]' 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.627 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.884 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.813 13:57:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.813 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.071 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.637 00:21:50.637 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.637 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.637 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.894 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.894 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.894 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.894 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.895 { 00:21:50.895 "cntlid": 129, 00:21:50.895 "qid": 0, 00:21:50.895 "state": "enabled", 00:21:50.895 "listen_address": { 00:21:50.895 "trtype": "TCP", 00:21:50.895 "adrfam": "IPv4", 00:21:50.895 "traddr": "10.0.0.2", 00:21:50.895 "trsvcid": "4420" 00:21:50.895 }, 00:21:50.895 "peer_address": { 00:21:50.895 "trtype": "TCP", 00:21:50.895 "adrfam": "IPv4", 00:21:50.895 "traddr": "10.0.0.1", 00:21:50.895 "trsvcid": "35946" 00:21:50.895 }, 00:21:50.895 "auth": { 00:21:50.895 "state": "completed", 00:21:50.895 "digest": "sha512", 00:21:50.895 "dhgroup": "ffdhe6144" 00:21:50.895 } 00:21:50.895 } 00:21:50.895 ]' 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.895 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.152 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:21:52.093 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.093 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.093 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.093 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.350 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.350 13:57:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.350 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.350 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.608 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.172 00:21:53.172 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.172 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.172 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.172 { 00:21:53.172 "cntlid": 131, 00:21:53.172 "qid": 0, 00:21:53.172 "state": "enabled", 00:21:53.172 "listen_address": { 00:21:53.172 "trtype": "TCP", 00:21:53.172 "adrfam": "IPv4", 00:21:53.172 "traddr": "10.0.0.2", 00:21:53.172 "trsvcid": "4420" 00:21:53.172 }, 00:21:53.172 "peer_address": { 00:21:53.172 "trtype": "TCP", 00:21:53.172 "adrfam": "IPv4", 00:21:53.172 "traddr": "10.0.0.1", 00:21:53.172 "trsvcid": "35972" 00:21:53.172 }, 00:21:53.172 "auth": { 00:21:53.172 "state": "completed", 00:21:53.172 "digest": "sha512", 00:21:53.172 "dhgroup": "ffdhe6144" 00:21:53.172 } 00:21:53.172 } 00:21:53.172 ]' 00:21:53.172 13:57:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.428 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.685 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.647 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.915 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.915 13:57:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.479 00:21:55.480 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.480 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.480 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.736 { 00:21:55.736 "cntlid": 133, 00:21:55.736 "qid": 0, 00:21:55.736 "state": "enabled", 00:21:55.736 "listen_address": { 00:21:55.736 "trtype": "TCP", 00:21:55.736 "adrfam": "IPv4", 00:21:55.736 "traddr": "10.0.0.2", 00:21:55.736 "trsvcid": "4420" 00:21:55.736 }, 00:21:55.736 "peer_address": { 00:21:55.736 "trtype": "TCP", 00:21:55.736 "adrfam": "IPv4", 00:21:55.736 "traddr": "10.0.0.1", 00:21:55.736 "trsvcid": "42646" 00:21:55.736 }, 00:21:55.736 "auth": { 00:21:55.736 "state": "completed", 00:21:55.736 "digest": "sha512", 00:21:55.736 "dhgroup": "ffdhe6144" 00:21:55.736 } 00:21:55.736 } 00:21:55.736 ]' 
00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.736 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.994 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.927 13:57:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.927 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.185 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.442 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.442 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.443 13:57:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.701 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.959 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.217 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.217 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.217 { 00:21:58.217 "cntlid": 135, 00:21:58.217 "qid": 0, 00:21:58.217 "state": "enabled", 00:21:58.217 "listen_address": { 00:21:58.217 "trtype": "TCP", 00:21:58.217 "adrfam": "IPv4", 00:21:58.217 "traddr": "10.0.0.2", 00:21:58.217 "trsvcid": "4420" 00:21:58.217 }, 00:21:58.217 "peer_address": { 00:21:58.217 "trtype": "TCP", 00:21:58.217 "adrfam": "IPv4", 00:21:58.217 "traddr": "10.0.0.1", 00:21:58.217 "trsvcid": "42672" 00:21:58.217 }, 00:21:58.217 "auth": { 00:21:58.217 "state": "completed", 00:21:58.217 "digest": "sha512", 00:21:58.217 "dhgroup": "ffdhe6144" 00:21:58.217 } 00:21:58.217 } 00:21:58.217 ]' 00:21:58.217 13:57:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.217 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.217 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.217 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.217 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.217 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.217 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.217 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.476 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.409 
13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.409 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.667 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.600 00:22:00.600 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.600 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.600 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.858 { 00:22:00.858 "cntlid": 137, 00:22:00.858 "qid": 0, 00:22:00.858 "state": "enabled", 00:22:00.858 "listen_address": { 00:22:00.858 "trtype": "TCP", 00:22:00.858 "adrfam": "IPv4", 00:22:00.858 "traddr": "10.0.0.2", 00:22:00.858 "trsvcid": "4420" 00:22:00.858 }, 00:22:00.858 "peer_address": { 00:22:00.858 "trtype": "TCP", 00:22:00.858 "adrfam": "IPv4", 00:22:00.858 "traddr": "10.0.0.1", 00:22:00.858 "trsvcid": "42706" 00:22:00.858 }, 00:22:00.858 "auth": { 00:22:00.858 "state": "completed", 00:22:00.858 "digest": "sha512", 00:22:00.858 "dhgroup": 
"ffdhe8192" 00:22:00.858 } 00:22:00.858 } 00:22:00.858 ]' 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.858 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.115 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:22:02.047 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.048 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.048 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.048 13:57:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.048 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.048 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.048 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.048 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.611 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.541 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.541 { 00:22:03.541 "cntlid": 139, 00:22:03.541 "qid": 0, 00:22:03.541 "state": "enabled", 00:22:03.541 "listen_address": { 00:22:03.541 "trtype": "TCP", 00:22:03.541 "adrfam": "IPv4", 00:22:03.541 "traddr": "10.0.0.2", 00:22:03.541 "trsvcid": "4420" 00:22:03.541 }, 00:22:03.541 "peer_address": { 00:22:03.541 "trtype": "TCP", 00:22:03.541 "adrfam": "IPv4", 00:22:03.541 "traddr": "10.0.0.1", 00:22:03.541 "trsvcid": "42744" 00:22:03.541 }, 00:22:03.541 
"auth": { 00:22:03.541 "state": "completed", 00:22:03.541 "digest": "sha512", 00:22:03.541 "dhgroup": "ffdhe8192" 00:22:03.541 } 00:22:03.541 } 00:22:03.541 ]' 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.541 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.798 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.798 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.798 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.798 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.798 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.055 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:N2EzOWNhZjU4OGUwODVhNDU0MTI3OTY5MDFlNTA3NTNVAQO2: --dhchap-ctrl-secret DHHC-1:02:ZDE5MGFlZjMwNjZmYjI2NDk4NjNhYTFjZmM3MDA2ZDU4YjdjMGYzNzhkYzZmYTRkXpEWaw==: 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.985 13:57:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.243 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.172 00:22:06.172 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.172 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.172 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.429 { 00:22:06.429 "cntlid": 141, 00:22:06.429 "qid": 0, 00:22:06.429 "state": "enabled", 00:22:06.429 "listen_address": { 00:22:06.429 "trtype": "TCP", 00:22:06.429 "adrfam": "IPv4", 00:22:06.429 "traddr": "10.0.0.2", 00:22:06.429 "trsvcid": "4420" 00:22:06.429 }, 00:22:06.429 "peer_address": { 00:22:06.429 "trtype": "TCP", 00:22:06.429 "adrfam": "IPv4", 00:22:06.429 "traddr": "10.0.0.1", 00:22:06.429 "trsvcid": 
"47744" 00:22:06.429 }, 00:22:06.429 "auth": { 00:22:06.429 "state": "completed", 00:22:06.429 "digest": "sha512", 00:22:06.429 "dhgroup": "ffdhe8192" 00:22:06.429 } 00:22:06.429 } 00:22:06.429 ]' 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.429 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.686 13:57:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM4N2MyZTM3OTczOTNkMTczNzVmYTc1ZDE4MmI4MmJlNzVjNDAzMTYwYzBjZjc53loLpQ==: --dhchap-ctrl-secret DHHC-1:01:ZGU5NTc2ZTNhNTA2ZTQ0OTBlYTdhZTUwMzJhNmFkMTnfX+2g: 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.617 13:57:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.617 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.875 13:57:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.808 00:22:08.808 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.808 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.808 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.066 { 00:22:09.066 "cntlid": 143, 00:22:09.066 "qid": 0, 00:22:09.066 "state": "enabled", 00:22:09.066 "listen_address": { 00:22:09.066 "trtype": "TCP", 00:22:09.066 "adrfam": "IPv4", 00:22:09.066 "traddr": "10.0.0.2", 00:22:09.066 "trsvcid": "4420" 00:22:09.066 }, 00:22:09.066 "peer_address": { 00:22:09.066 "trtype": "TCP", 00:22:09.066 "adrfam": "IPv4", 00:22:09.066 "traddr": "10.0.0.1", 00:22:09.066 "trsvcid": "47776" 00:22:09.066 }, 00:22:09.066 "auth": { 
00:22:09.066 "state": "completed", 00:22:09.066 "digest": "sha512", 00:22:09.066 "dhgroup": "ffdhe8192" 00:22:09.066 } 00:22:09.066 } 00:22:09.066 ]' 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.066 13:57:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.066 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.066 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.066 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.324 13:57:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:22:10.256 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.513 13:57:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.513 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.770 13:57:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.701 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.701 { 00:22:11.701 "cntlid": 145, 00:22:11.701 "qid": 0, 
00:22:11.701 "state": "enabled", 00:22:11.701 "listen_address": { 00:22:11.701 "trtype": "TCP", 00:22:11.701 "adrfam": "IPv4", 00:22:11.701 "traddr": "10.0.0.2", 00:22:11.701 "trsvcid": "4420" 00:22:11.701 }, 00:22:11.701 "peer_address": { 00:22:11.701 "trtype": "TCP", 00:22:11.701 "adrfam": "IPv4", 00:22:11.701 "traddr": "10.0.0.1", 00:22:11.701 "trsvcid": "47804" 00:22:11.701 }, 00:22:11.701 "auth": { 00:22:11.701 "state": "completed", 00:22:11.701 "digest": "sha512", 00:22:11.701 "dhgroup": "ffdhe8192" 00:22:11.701 } 00:22:11.701 } 00:22:11.701 ]' 00:22:11.701 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.958 13:57:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.216 13:57:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZGRiZWMyNzJkZGM5NWQ5MWFjZjFiZjMwMTFjYjIwZDQ2ZDlkOWU0YzA0MzU4ZjMy8T1Lkg==: --dhchap-ctrl-secret DHHC-1:03:MjdiZDU0ZDM0MmM1Y2U5M2UyYTkzNWNmYzNkNDk3MzFmODg2ZjEwOGYzMDQzM2UxODU0Njk1ZmIxYjJiNWVhN0Dl5hA=: 00:22:13.179 13:57:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:13.179 13:57:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:13.179 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.111 request: 00:22:14.111 { 00:22:14.111 "name": "nvme0", 00:22:14.111 "trtype": "tcp", 00:22:14.111 "traddr": "10.0.0.2", 00:22:14.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.111 "adrfam": "ipv4", 00:22:14.111 "trsvcid": "4420", 00:22:14.111 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.111 "dhchap_key": "key2", 00:22:14.111 "method": "bdev_nvme_attach_controller", 00:22:14.111 "req_id": 1 00:22:14.111 } 00:22:14.111 Got JSON-RPC error response 00:22:14.111 response: 00:22:14.111 { 00:22:14.111 "code": -5, 00:22:14.111 "message": "Input/output error" 00:22:14.111 } 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.111 13:57:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.111 13:57:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:14.111 13:57:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.045 request: 00:22:15.045 { 00:22:15.045 "name": "nvme0", 00:22:15.045 "trtype": "tcp", 00:22:15.045 "traddr": "10.0.0.2", 00:22:15.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.045 "adrfam": "ipv4", 00:22:15.045 "trsvcid": "4420", 00:22:15.045 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.045 "dhchap_key": "key1", 00:22:15.045 "dhchap_ctrlr_key": "ckey2", 00:22:15.045 "method": "bdev_nvme_attach_controller", 00:22:15.045 "req_id": 1 00:22:15.045 } 00:22:15.045 Got JSON-RPC error response 00:22:15.045 response: 00:22:15.045 { 00:22:15.045 "code": -5, 00:22:15.045 "message": "Input/output error" 00:22:15.045 } 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.045 13:57:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # type -t hostrpc 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.045 13:57:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.977 request: 00:22:15.977 { 00:22:15.977 "name": "nvme0", 00:22:15.977 "trtype": "tcp", 00:22:15.977 "traddr": "10.0.0.2", 00:22:15.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.977 "adrfam": "ipv4", 00:22:15.977 "trsvcid": "4420", 00:22:15.977 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.977 "dhchap_key": "key1", 00:22:15.977 "dhchap_ctrlr_key": "ckey1", 00:22:15.977 "method": "bdev_nvme_attach_controller", 00:22:15.977 "req_id": 1 00:22:15.977 } 00:22:15.977 Got JSON-RPC error response 00:22:15.977 response: 00:22:15.977 { 00:22:15.977 "code": -5, 00:22:15.977 "message": "Input/output error" 00:22:15.977 } 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1463668 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1463668 ']' 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1463668 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1463668 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1463668' 00:22:15.977 killing process with pid 1463668 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1463668 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1463668 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:15.977 13:57:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1486149 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1486149 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1486149 ']' 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.977 13:57:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@142 -- # waitforlisten 1486149 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1486149 ']' 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:16.541 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.799 13:57:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.733 00:22:17.733 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.733 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.733 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.991 { 00:22:17.991 "cntlid": 1, 00:22:17.991 "qid": 0, 00:22:17.991 "state": "enabled", 00:22:17.991 "listen_address": { 00:22:17.991 "trtype": "TCP", 00:22:17.991 "adrfam": "IPv4", 00:22:17.991 "traddr": "10.0.0.2", 00:22:17.991 "trsvcid": "4420" 00:22:17.991 }, 00:22:17.991 "peer_address": { 00:22:17.991 "trtype": "TCP", 00:22:17.991 "adrfam": "IPv4", 00:22:17.991 "traddr": "10.0.0.1", 00:22:17.991 "trsvcid": "55000" 00:22:17.991 }, 00:22:17.991 "auth": { 00:22:17.991 "state": "completed", 00:22:17.991 "digest": "sha512", 00:22:17.991 "dhgroup": "ffdhe8192" 00:22:17.991 } 00:22:17.991 } 00:22:17.991 ]' 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.991 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.249 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.249 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.249 13:57:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.506 13:57:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWJmZDEwZGVhYzRjMGE2YTM0NTFlMTQwOTJmM2Y1ODI2OGM5NDMwNGE0OTUyOTlhY2FhMTk3NzkxN2M4ZDE4N5g0Tqw=: 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:19.440 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.698 
13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.698 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.698 request: 00:22:19.698 { 00:22:19.698 "name": "nvme0", 00:22:19.698 "trtype": "tcp", 00:22:19.698 "traddr": "10.0.0.2", 00:22:19.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.698 "adrfam": "ipv4", 00:22:19.698 "trsvcid": "4420", 00:22:19.698 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.698 "dhchap_key": "key3", 00:22:19.698 "method": "bdev_nvme_attach_controller", 00:22:19.698 "req_id": 1 00:22:19.698 } 00:22:19.698 Got JSON-RPC error response 00:22:19.698 response: 
00:22:19.698 { 00:22:19.698 "code": -5, 00:22:19.698 "message": "Input/output error" 00:22:19.698 } 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:19.956 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.214 13:57:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.473 request: 00:22:20.473 { 00:22:20.473 "name": "nvme0", 00:22:20.473 "trtype": "tcp", 00:22:20.473 "traddr": "10.0.0.2", 00:22:20.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.473 "adrfam": "ipv4", 00:22:20.473 "trsvcid": "4420", 00:22:20.473 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.473 "dhchap_key": "key3", 00:22:20.473 "method": "bdev_nvme_attach_controller", 00:22:20.473 "req_id": 1 00:22:20.473 } 00:22:20.473 Got JSON-RPC error response 00:22:20.473 response: 00:22:20.473 { 00:22:20.473 "code": -5, 00:22:20.473 "message": "Input/output error" 00:22:20.473 } 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:20.473 13:57:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.473 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.731 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.989 request: 00:22:20.989 { 00:22:20.989 "name": "nvme0", 00:22:20.989 "trtype": "tcp", 00:22:20.989 "traddr": "10.0.0.2", 00:22:20.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.989 "adrfam": "ipv4", 00:22:20.989 "trsvcid": "4420", 00:22:20.989 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:22:20.989 "dhchap_key": "key0", 00:22:20.989 "dhchap_ctrlr_key": "key1", 00:22:20.989 "method": "bdev_nvme_attach_controller", 00:22:20.989 "req_id": 1 00:22:20.989 } 00:22:20.989 Got JSON-RPC error response 00:22:20.989 response: 00:22:20.989 { 00:22:20.990 "code": -5, 00:22:20.990 "message": "Input/output error" 00:22:20.990 } 00:22:20.990 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.990 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.990 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.990 13:57:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.990 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:20.990 13:57:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:21.248 00:22:21.248 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:21.248 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:21.248 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.506 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.506 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:22:21.506 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1463750 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1463750 ']' 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1463750 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1463750 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:21.763 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1463750' 00:22:21.764 killing process with pid 1463750 00:22:21.764 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1463750 00:22:21.764 13:57:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1463750 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.021 13:57:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.021 rmmod nvme_tcp 00:22:22.021 rmmod nvme_fabrics 00:22:22.021 rmmod nvme_keyring 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1486149 ']' 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1486149 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1486149 ']' 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1486149 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1486149 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1486149' 00:22:22.280 killing process with pid 1486149 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1486149 00:22:22.280 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1486149 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.539 
13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.539 13:58:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.440 13:58:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:24.440 13:58:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.SHI /tmp/spdk.key-sha256.2Ab /tmp/spdk.key-sha384.9YC /tmp/spdk.key-sha512.j8S /tmp/spdk.key-sha512.equ /tmp/spdk.key-sha384.VB4 /tmp/spdk.key-sha256.MmL '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:24.440 00:22:24.440 real 3m7.811s 00:22:24.440 user 7m17.472s 00:22:24.440 sys 0m24.563s 00:22:24.440 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:24.440 13:58:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.440 ************************************ 00:22:24.440 END TEST nvmf_auth_target 00:22:24.440 ************************************ 00:22:24.440 13:58:02 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:24.440 13:58:02 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:24.440 13:58:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 
']' 00:22:24.440 13:58:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:24.440 13:58:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:24.440 ************************************ 00:22:24.440 START TEST nvmf_bdevio_no_huge 00:22:24.440 ************************************ 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:24.440 * Looking for test storage... 00:22:24.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.440 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:24.698 13:58:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.698 13:58:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:26.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:26.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:26.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.595 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:26.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:26.596 13:58:04 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.596 
13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:22:26.596 00:22:26.596 --- 10.0.0.2 ping statistics --- 00:22:26.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.596 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:22:26.596 00:22:26.596 --- 10.0.0.1 ping statistics --- 00:22:26.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.596 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.596 13:58:04 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1488791 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1488791 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1488791 ']' 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.596 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:26.853 [2024-07-14 13:58:04.618122] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:26.853 [2024-07-14 13:58:04.618213] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:26.853 [2024-07-14 13:58:04.691080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.853 [2024-07-14 13:58:04.782841] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.853 [2024-07-14 13:58:04.782916] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.853 [2024-07-14 13:58:04.782934] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.853 [2024-07-14 13:58:04.782946] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.853 [2024-07-14 13:58:04.782957] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.853 [2024-07-14 13:58:04.783047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.853 [2024-07-14 13:58:04.783101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:26.853 [2024-07-14 13:58:04.783153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:26.853 [2024-07-14 13:58:04.783157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 [2024-07-14 13:58:04.899932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 Malloc0 00:22:27.110 13:58:04 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 [2024-07-14 13:58:04.937781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:27.110 13:58:04 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:27.110 { 00:22:27.110 "params": { 00:22:27.110 "name": "Nvme$subsystem", 00:22:27.110 "trtype": "$TEST_TRANSPORT", 00:22:27.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.110 "adrfam": "ipv4", 00:22:27.110 "trsvcid": "$NVMF_PORT", 00:22:27.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.110 "hdgst": ${hdgst:-false}, 00:22:27.110 "ddgst": ${ddgst:-false} 00:22:27.110 }, 00:22:27.110 "method": "bdev_nvme_attach_controller" 00:22:27.110 } 00:22:27.110 EOF 00:22:27.110 )") 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:27.110 13:58:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:27.110 "params": { 00:22:27.110 "name": "Nvme1", 00:22:27.110 "trtype": "tcp", 00:22:27.110 "traddr": "10.0.0.2", 00:22:27.110 "adrfam": "ipv4", 00:22:27.110 "trsvcid": "4420", 00:22:27.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.110 "hdgst": false, 00:22:27.110 "ddgst": false 00:22:27.110 }, 00:22:27.110 "method": "bdev_nvme_attach_controller" 00:22:27.110 }' 00:22:27.110 [2024-07-14 13:58:04.981636] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:27.110 [2024-07-14 13:58:04.981723] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1488936 ] 00:22:27.110 [2024-07-14 13:58:05.045225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:27.367 [2024-07-14 13:58:05.129177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.367 [2024-07-14 13:58:05.129230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.367 [2024-07-14 13:58:05.129233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.367 I/O targets: 00:22:27.367 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:27.367 00:22:27.367 00:22:27.367 CUnit - A unit testing framework for C - Version 2.1-3 00:22:27.367 http://cunit.sourceforge.net/ 00:22:27.367 00:22:27.367 00:22:27.367 Suite: bdevio tests on: Nvme1n1 00:22:27.625 Test: blockdev write read block ...passed 00:22:27.625 Test: blockdev write zeroes read block ...passed 00:22:27.625 Test: blockdev write zeroes read no split ...passed 00:22:27.625 Test: blockdev write zeroes read split ...passed 00:22:27.625 Test: blockdev write zeroes read split partial ...passed 00:22:27.625 Test: blockdev reset ...[2024-07-14 13:58:05.524308] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.625 [2024-07-14 13:58:05.524413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad3a00 (9): Bad file descriptor 00:22:27.625 [2024-07-14 13:58:05.542257] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:27.625 passed 00:22:27.625 Test: blockdev write read 8 blocks ...passed 00:22:27.625 Test: blockdev write read size > 128k ...passed 00:22:27.625 Test: blockdev write read invalid size ...passed 00:22:27.625 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:27.625 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:27.625 Test: blockdev write read max offset ...passed 00:22:27.883 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:27.883 Test: blockdev writev readv 8 blocks ...passed 00:22:27.883 Test: blockdev writev readv 30 x 1block ...passed 00:22:27.883 Test: blockdev writev readv block ...passed 00:22:27.883 Test: blockdev writev readv size > 128k ...passed 00:22:27.883 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:27.883 Test: blockdev comparev and writev ...[2024-07-14 13:58:05.715054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.715089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.715126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.715155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.715543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.715571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.715607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.715634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.716027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.716054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.716089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.716116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.716494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.716520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.716555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:27.883 [2024-07-14 13:58:05.716583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:27.883 passed 00:22:27.883 Test: blockdev nvme passthru rw ...passed 00:22:27.883 Test: blockdev nvme passthru vendor specific ...[2024-07-14 13:58:05.798247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.883 [2024-07-14 13:58:05.798277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.798454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.883 [2024-07-14 13:58:05.798482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.798649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.883 [2024-07-14 13:58:05.798675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:27.883 [2024-07-14 13:58:05.798843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.883 [2024-07-14 13:58:05.798869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:27.883 passed 00:22:27.883 Test: blockdev nvme admin passthru ...passed 00:22:27.883 Test: blockdev copy ...passed 00:22:27.883 00:22:27.883 Run Summary: Type Total Ran Passed Failed Inactive 00:22:27.883 suites 1 1 n/a 0 0 00:22:27.883 tests 23 23 23 0 0 00:22:27.883 asserts 152 152 152 0 n/a 00:22:27.883 00:22:27.883 Elapsed time = 1.068 seconds 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.449 rmmod nvme_tcp 00:22:28.449 rmmod nvme_fabrics 00:22:28.449 rmmod nvme_keyring 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1488791 ']' 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1488791 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1488791 ']' 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1488791 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1488791 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1488791' 00:22:28.449 killing process with pid 1488791 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1488791 00:22:28.449 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1488791 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.707 13:58:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.240 13:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.240 00:22:31.240 real 0m6.326s 00:22:31.240 user 0m9.772s 00:22:31.240 sys 0m2.520s 00:22:31.240 13:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:31.240 13:58:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.240 ************************************ 00:22:31.240 END TEST nvmf_bdevio_no_huge 00:22:31.240 ************************************ 00:22:31.240 13:58:08 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:31.240 13:58:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:31.240 13:58:08 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:22:31.240 13:58:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.240 ************************************ 00:22:31.240 START TEST nvmf_tls 00:22:31.240 ************************************ 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:31.240 * Looking for test storage... 00:22:31.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.240 
13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.240 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.171 13:58:10 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.171 13:58:10 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:22:33.171 00:22:33.171 --- 10.0.0.2 ping statistics --- 00:22:33.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.171 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:22:33.171 00:22:33.171 --- 10.0.0.1 ping statistics --- 00:22:33.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.171 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:22:33.171 13:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1491007 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1491007 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1491007 ']' 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.172 13:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.172 [2024-07-14 13:58:10.941731] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:33.172 [2024-07-14 13:58:10.941826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.172 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.172 [2024-07-14 13:58:11.010357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.172 [2024-07-14 13:58:11.097925] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.172 [2024-07-14 13:58:11.097998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.172 [2024-07-14 13:58:11.098027] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.172 [2024-07-14 13:58:11.098038] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.172 [2024-07-14 13:58:11.098048] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.172 [2024-07-14 13:58:11.098077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:33.431 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:33.691 true 00:22:33.691 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:33.691 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:33.951 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:33.951 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:33.951 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:34.209 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.209 13:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:34.466 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:34.466 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:34.466 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:34.724 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.724 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:34.981 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:34.981 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:34.981 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.981 13:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:35.240 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:35.240 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:35.240 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:35.499 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.499 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:35.757 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:35.757 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:35.757 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:36.014 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:36.014 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:36.272 13:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.2miYgOXVas 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:36.272 
13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.xiBQ6CFntu 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2miYgOXVas 00:22:36.272 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xiBQ6CFntu 00:22:36.273 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:36.531 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:36.791 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2miYgOXVas 00:22:36.791 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2miYgOXVas 00:22:36.791 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:37.050 [2024-07-14 13:58:14.901891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.050 13:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:37.308 13:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:37.566 [2024-07-14 13:58:15.387231] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.566 [2024-07-14 13:58:15.387473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:37.566 13:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:37.824 malloc0 00:22:37.824 13:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:38.081 13:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2miYgOXVas 00:22:38.341 [2024-07-14 13:58:16.132035] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.341 13:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2miYgOXVas 00:22:38.341 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.329 Initializing NVMe Controllers 00:22:48.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:48.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:48.329 Initialization complete. Launching workers. 
00:22:48.329 ======================================================== 00:22:48.329 Latency(us) 00:22:48.329 Device Information : IOPS MiB/s Average min max 00:22:48.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7598.10 29.68 8425.90 1296.00 9246.55 00:22:48.329 ======================================================== 00:22:48.329 Total : 7598.10 29.68 8425.90 1296.00 9246.55 00:22:48.329 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2miYgOXVas 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2miYgOXVas' 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1492775 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1492775 /var/tmp/bdevperf.sock 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1492775 ']' 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:48.329 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.329 [2024-07-14 13:58:26.301372] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:48.329 [2024-07-14 13:58:26.301454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492775 ] 00:22:48.588 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.588 [2024-07-14 13:58:26.360928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.588 [2024-07-14 13:58:26.453981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.588 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:48.588 13:58:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:48.588 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2miYgOXVas 00:22:48.846 [2024-07-14 13:58:26.779317] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.846 [2024-07-14 13:58:26.779424] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.106 TLSTESTn1 00:22:49.106 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:49.106 Running I/O for 10 seconds... 00:22:59.073 00:22:59.073 Latency(us) 00:22:59.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.073 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.073 Verification LBA range: start 0x0 length 0x2000 00:22:59.073 TLSTESTn1 : 10.02 3598.12 14.06 0.00 0.00 35518.33 6068.15 29321.29 00:22:59.073 =================================================================================================================== 00:22:59.073 Total : 3598.12 14.06 0.00 0.00 35518.33 6068.15 29321.29 00:22:59.073 0 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1492775 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1492775 ']' 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1492775 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1492775 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1492775' 00:22:59.073 killing process with pid 1492775 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1492775 00:22:59.073 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.073 00:22:59.073 Latency(us) 
00:22:59.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.073 =================================================================================================================== 00:22:59.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.073 [2024-07-14 13:58:37.050619] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.073 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1492775 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiBQ6CFntu 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiBQ6CFntu 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xiBQ6CFntu 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xiBQ6CFntu' 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494088 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494088 /var/tmp/bdevperf.sock 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494088 ']' 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.331 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.589 [2024-07-14 13:58:37.317676] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:59.589 [2024-07-14 13:58:37.317751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494088 ] 00:22:59.589 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.589 [2024-07-14 13:58:37.382886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.589 [2024-07-14 13:58:37.476816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.846 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.846 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.846 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xiBQ6CFntu 00:23:00.104 [2024-07-14 13:58:37.866963] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.104 [2024-07-14 13:58:37.867101] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.104 [2024-07-14 13:58:37.872615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:00.104 [2024-07-14 13:58:37.873141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86ed0 (107): Transport endpoint is not connected 00:23:00.104 [2024-07-14 13:58:37.874116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86ed0 (9): Bad file descriptor 00:23:00.104 [2024-07-14 13:58:37.875114] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.104 [2024-07-14 13:58:37.875139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.104 [2024-07-14 13:58:37.875167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.104 request: 00:23:00.104 { 00:23:00.104 "name": "TLSTEST", 00:23:00.104 "trtype": "tcp", 00:23:00.104 "traddr": "10.0.0.2", 00:23:00.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.104 "adrfam": "ipv4", 00:23:00.104 "trsvcid": "4420", 00:23:00.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.104 "psk": "/tmp/tmp.xiBQ6CFntu", 00:23:00.104 "method": "bdev_nvme_attach_controller", 00:23:00.104 "req_id": 1 00:23:00.104 } 00:23:00.104 Got JSON-RPC error response 00:23:00.104 response: 00:23:00.104 { 00:23:00.104 "code": -5, 00:23:00.104 "message": "Input/output error" 00:23:00.104 } 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494088 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494088 ']' 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494088 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494088 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494088' 00:23:00.104 killing process with pid 1494088 00:23:00.104 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494088 
00:23:00.104 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.104 00:23:00.105 Latency(us) 00:23:00.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.105 =================================================================================================================== 00:23:00.105 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.105 [2024-07-14 13:58:37.925778] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:00.105 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494088 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2miYgOXVas 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2miYgOXVas 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2miYgOXVas 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2miYgOXVas' 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494220 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494220 /var/tmp/bdevperf.sock 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494220 ']' 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:00.362 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.362 [2024-07-14 13:58:38.187729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:00.362 [2024-07-14 13:58:38.187803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494220 ] 00:23:00.362 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.362 [2024-07-14 13:58:38.245001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.362 [2024-07-14 13:58:38.325282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.619 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:00.619 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:00.619 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2miYgOXVas 00:23:00.876 [2024-07-14 13:58:38.653246] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.876 [2024-07-14 13:58:38.653354] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.876 [2024-07-14 13:58:38.663014] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:00.876 [2024-07-14 13:58:38.663060] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:00.876 [2024-07-14 13:58:38.663099] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:00.876 [2024-07-14 13:58:38.663260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1346ed0 (107): Transport endpoint is not connected 00:23:00.876 [2024-07-14 13:58:38.664240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1346ed0 (9): Bad file descriptor 00:23:00.876 [2024-07-14 13:58:38.665244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.876 [2024-07-14 13:58:38.665266] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.876 [2024-07-14 13:58:38.665291] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:00.876 request: 00:23:00.876 { 00:23:00.876 "name": "TLSTEST", 00:23:00.876 "trtype": "tcp", 00:23:00.876 "traddr": "10.0.0.2", 00:23:00.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.876 "adrfam": "ipv4", 00:23:00.876 "trsvcid": "4420", 00:23:00.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.877 "psk": "/tmp/tmp.2miYgOXVas", 00:23:00.877 "method": "bdev_nvme_attach_controller", 00:23:00.877 "req_id": 1 00:23:00.877 } 00:23:00.877 Got JSON-RPC error response 00:23:00.877 response: 00:23:00.877 { 00:23:00.877 "code": -5, 00:23:00.877 "message": "Input/output error" 00:23:00.877 } 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494220 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494220 ']' 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494220 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494220 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494220' 00:23:00.877 killing process with pid 1494220 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494220 00:23:00.877 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.877 00:23:00.877 Latency(us) 00:23:00.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.877 =================================================================================================================== 00:23:00.877 Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:23:00.877 [2024-07-14 13:58:38.716561] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:00.877 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494220 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2miYgOXVas 00:23:01.134 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2miYgOXVas 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2miYgOXVas 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2miYgOXVas' 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494241 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494241 /var/tmp/bdevperf.sock 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494241 ']' 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:01.135 13:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.135 [2024-07-14 13:58:38.983794] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:01.135 [2024-07-14 13:58:38.983901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494241 ] 00:23:01.135 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.135 [2024-07-14 13:58:39.046504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.392 [2024-07-14 13:58:39.135981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.392 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:01.392 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:01.392 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2miYgOXVas 00:23:01.649 [2024-07-14 13:58:39.465826] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.649 [2024-07-14 13:58:39.465976] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:01.649 [2024-07-14 13:58:39.475144] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:01.649 [2024-07-14 13:58:39.475175] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:01.649 [2024-07-14 13:58:39.475232] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:01.649 
[2024-07-14 13:58:39.475802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1884ed0 (107): Transport endpoint is not connected 00:23:01.649 [2024-07-14 13:58:39.476788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1884ed0 (9): Bad file descriptor 00:23:01.649 [2024-07-14 13:58:39.477787] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:01.649 [2024-07-14 13:58:39.477809] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:01.649 [2024-07-14 13:58:39.477836] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:01.649 request: 00:23:01.649 { 00:23:01.649 "name": "TLSTEST", 00:23:01.649 "trtype": "tcp", 00:23:01.649 "traddr": "10.0.0.2", 00:23:01.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.649 "adrfam": "ipv4", 00:23:01.649 "trsvcid": "4420", 00:23:01.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.649 "psk": "/tmp/tmp.2miYgOXVas", 00:23:01.649 "method": "bdev_nvme_attach_controller", 00:23:01.649 "req_id": 1 00:23:01.649 } 00:23:01.649 Got JSON-RPC error response 00:23:01.649 response: 00:23:01.649 { 00:23:01.649 "code": -5, 00:23:01.649 "message": "Input/output error" 00:23:01.649 } 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494241 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494241 ']' 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494241 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494241 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
process_name=reactor_2 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494241' 00:23:01.649 killing process with pid 1494241 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494241 00:23:01.649 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.649 00:23:01.649 Latency(us) 00:23:01.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.649 =================================================================================================================== 00:23:01.649 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.649 [2024-07-14 13:58:39.525313] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:01.649 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494241 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:01.906 
13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494382 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494382 /var/tmp/bdevperf.sock 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494382 ']' 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:01.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:01.906 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.906 [2024-07-14 13:58:39.785450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:01.906 [2024-07-14 13:58:39.785526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494382 ] 00:23:01.906 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.906 [2024-07-14 13:58:39.842521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.164 [2024-07-14 13:58:39.925851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.164 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.164 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:02.164 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:02.424 [2024-07-14 13:58:40.278575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:02.424 [2024-07-14 13:58:40.280451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e285c0 (9): Bad file descriptor 00:23:02.424 [2024-07-14 13:58:40.281444] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:02.424 [2024-07-14 13:58:40.281467] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:02.424 [2024-07-14 13:58:40.281492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:02.424 request: 00:23:02.424 { 00:23:02.424 "name": "TLSTEST", 00:23:02.424 "trtype": "tcp", 00:23:02.424 "traddr": "10.0.0.2", 00:23:02.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.424 "adrfam": "ipv4", 00:23:02.424 "trsvcid": "4420", 00:23:02.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.424 "method": "bdev_nvme_attach_controller", 00:23:02.424 "req_id": 1 00:23:02.424 } 00:23:02.424 Got JSON-RPC error response 00:23:02.424 response: 00:23:02.424 { 00:23:02.424 "code": -5, 00:23:02.424 "message": "Input/output error" 00:23:02.424 } 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1494382 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494382 ']' 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494382 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494382 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494382' 00:23:02.424 killing process with pid 1494382 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494382 00:23:02.424 Received shutdown signal, test time was about 10.000000 seconds 00:23:02.424 00:23:02.424 Latency(us) 00:23:02.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:02.424 =================================================================================================================== 00:23:02.424 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:02.424 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494382 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1491007 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1491007 ']' 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1491007 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1491007 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1491007' 00:23:02.713 killing process with pid 1491007 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1491007 00:23:02.713 [2024-07-14 13:58:40.576154] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:02.713 13:58:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 1491007 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.LmJ62m0QS8 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LmJ62m0QS8 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1494534 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1494534 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1494534 ']' 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.971 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.971 [2024-07-14 13:58:40.919931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:02.971 [2024-07-14 13:58:40.920017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.229 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.229 [2024-07-14 13:58:40.985951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.229 [2024-07-14 13:58:41.069585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.229 [2024-07-14 13:58:41.069641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.229 [2024-07-14 13:58:41.069669] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.229 [2024-07-14 13:58:41.069680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:03.229 [2024-07-14 13:58:41.069690] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.229 [2024-07-14 13:58:41.069716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LmJ62m0QS8 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LmJ62m0QS8 00:23:03.229 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.486 [2024-07-14 13:58:41.426063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.487 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:03.744 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.001 [2024-07-14 13:58:41.919427] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.001 [2024-07-14 13:58:41.919659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.001 
13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:04.280 malloc0 00:23:04.280 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:04.537 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:04.795 [2024-07-14 13:58:42.705794] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LmJ62m0QS8 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LmJ62m0QS8' 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1494814 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1494814 /var/tmp/bdevperf.sock 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' 
-z 1494814 ']' 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:04.795 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 [2024-07-14 13:58:42.766587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:04.795 [2024-07-14 13:58:42.766663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494814 ] 00:23:05.053 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.053 [2024-07-14 13:58:42.825249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.053 [2024-07-14 13:58:42.908434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.053 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:05.053 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:05.053 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:05.310 [2024-07-14 13:58:43.239029] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:05.310 [2024-07-14 13:58:43.239153] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:05.566 TLSTESTn1 00:23:05.566 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:05.566 Running I/O for 10 seconds... 00:23:15.526 00:23:15.526 Latency(us) 00:23:15.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.526 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.526 Verification LBA range: start 0x0 length 0x2000 00:23:15.526 TLSTESTn1 : 10.02 3595.13 14.04 0.00 0.00 35540.65 8592.50 40195.41 00:23:15.526 =================================================================================================================== 00:23:15.526 Total : 3595.13 14.04 0.00 0.00 35540.65 8592.50 40195.41 00:23:15.526 0 00:23:15.526 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.526 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1494814 00:23:15.526 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494814 ']' 00:23:15.526 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494814 00:23:15.526 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.526 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494814 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing 
process with pid 1494814' 00:23:15.784 killing process with pid 1494814 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494814 00:23:15.784 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.784 00:23:15.784 Latency(us) 00:23:15.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.784 =================================================================================================================== 00:23:15.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.784 [2024-07-14 13:58:53.535632] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494814 00:23:15.784 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LmJ62m0QS8 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LmJ62m0QS8 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LmJ62m0QS8 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LmJ62m0QS8 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LmJ62m0QS8' 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1496012 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1496012 /var/tmp/bdevperf.sock 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1496012 ']' 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.042 13:58:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.042 [2024-07-14 13:58:53.813528] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:16.042 [2024-07-14 13:58:53.813606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496012 ] 00:23:16.042 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.042 [2024-07-14 13:58:53.877012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.042 [2024-07-14 13:58:53.965371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.301 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.301 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:16.301 13:58:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:16.559 [2024-07-14 13:58:54.348087] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.559 [2024-07-14 13:58:54.348167] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:16.559 [2024-07-14 13:58:54.348202] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LmJ62m0QS8 00:23:16.559 request: 00:23:16.559 { 00:23:16.559 "name": "TLSTEST", 00:23:16.559 "trtype": "tcp", 00:23:16.559 "traddr": "10.0.0.2", 00:23:16.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.559 "adrfam": "ipv4", 00:23:16.559 "trsvcid": "4420", 00:23:16.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.559 "psk": "/tmp/tmp.LmJ62m0QS8", 00:23:16.559 "method": "bdev_nvme_attach_controller", 00:23:16.559 "req_id": 1 00:23:16.559 } 00:23:16.559 Got JSON-RPC error response 00:23:16.559 response: 00:23:16.559 { 00:23:16.559 "code": -1, 00:23:16.559 
"message": "Operation not permitted" 00:23:16.559 } 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1496012 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1496012 ']' 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1496012 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1496012 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1496012' 00:23:16.559 killing process with pid 1496012 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1496012 00:23:16.559 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.559 00:23:16.559 Latency(us) 00:23:16.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.559 =================================================================================================================== 00:23:16.559 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.559 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1496012 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1494534 00:23:16.816 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1494534 ']' 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1494534 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1494534 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1494534' 00:23:16.817 killing process with pid 1494534 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1494534 00:23:16.817 [2024-07-14 13:58:54.643117] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:16.817 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1494534 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1496155 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- 
# waitforlisten 1496155 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1496155 ']' 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:17.074 13:58:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.074 [2024-07-14 13:58:54.947449] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:17.074 [2024-07-14 13:58:54.947530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.074 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.074 [2024-07-14 13:58:55.018517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.332 [2024-07-14 13:58:55.109532] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.332 [2024-07-14 13:58:55.109596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.332 [2024-07-14 13:58:55.109612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.332 [2024-07-14 13:58:55.109625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.332 [2024-07-14 13:58:55.109637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.332 [2024-07-14 13:58:55.109673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LmJ62m0QS8 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LmJ62m0QS8 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.LmJ62m0QS8 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LmJ62m0QS8 00:23:17.332 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:17.590 [2024-07-14 13:58:55.521554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.590 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:17.848 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:18.105 [2024-07-14 13:58:56.006835] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.105 [2024-07-14 13:58:56.007092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.105 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:18.362 malloc0 00:23:18.362 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:18.619 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:18.875 [2024-07-14 13:58:56.741262] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:18.875 [2024-07-14 13:58:56.741300] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:18.876 [2024-07-14 13:58:56.741338] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:18.876 request: 00:23:18.876 { 00:23:18.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.876 "host": "nqn.2016-06.io.spdk:host1", 00:23:18.876 "psk": "/tmp/tmp.LmJ62m0QS8", 00:23:18.876 "method": "nvmf_subsystem_add_host", 00:23:18.876 "req_id": 1 00:23:18.876 } 00:23:18.876 Got JSON-RPC error response 00:23:18.876 response: 00:23:18.876 { 00:23:18.876 "code": -32603, 00:23:18.876 
"message": "Internal error" 00:23:18.876 } 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1496155 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1496155 ']' 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1496155 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1496155 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1496155' 00:23:18.876 killing process with pid 1496155 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1496155 00:23:18.876 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1496155 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LmJ62m0QS8 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.133 
13:58:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1496446 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1496446 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1496446 ']' 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.133 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.133 [2024-07-14 13:58:57.098745] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:19.133 [2024-07-14 13:58:57.098830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.390 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.390 [2024-07-14 13:58:57.167337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.390 [2024-07-14 13:58:57.260221] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.390 [2024-07-14 13:58:57.260286] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:19.390 [2024-07-14 13:58:57.260303] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.390 [2024-07-14 13:58:57.260316] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.390 [2024-07-14 13:58:57.260327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.390 [2024-07-14 13:58:57.260367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.390 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.390 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:19.390 13:58:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.390 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.390 13:58:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.647 13:58:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.647 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LmJ62m0QS8 00:23:19.647 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LmJ62m0QS8 00:23:19.647 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:19.647 [2024-07-14 13:58:57.612446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.904 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.904 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:23:20.162 [2024-07-14 13:58:58.101811] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.162 [2024-07-14 13:58:58.102071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.162 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:20.419 malloc0 00:23:20.420 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:20.678 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:20.936 [2024-07-14 13:58:58.835640] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1496729 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1496729 /var/tmp/bdevperf.sock 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1496729 ']' 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:20.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:20.936 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.936 [2024-07-14 13:58:58.896829] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:20.936 [2024-07-14 13:58:58.896943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496729 ] 00:23:21.195 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.195 [2024-07-14 13:58:58.956961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.195 [2024-07-14 13:58:59.043110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.195 13:58:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:21.195 13:58:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:21.195 13:58:59 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:21.452 [2024-07-14 13:58:59.368578] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.452 [2024-07-14 13:58:59.368708] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:21.710 TLSTESTn1 00:23:21.710 13:58:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:21.967 
13:58:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:21.967 "subsystems": [ 00:23:21.967 { 00:23:21.967 "subsystem": "keyring", 00:23:21.967 "config": [] 00:23:21.967 }, 00:23:21.967 { 00:23:21.967 "subsystem": "iobuf", 00:23:21.967 "config": [ 00:23:21.967 { 00:23:21.967 "method": "iobuf_set_options", 00:23:21.967 "params": { 00:23:21.967 "small_pool_count": 8192, 00:23:21.967 "large_pool_count": 1024, 00:23:21.967 "small_bufsize": 8192, 00:23:21.967 "large_bufsize": 135168 00:23:21.967 } 00:23:21.967 } 00:23:21.967 ] 00:23:21.967 }, 00:23:21.967 { 00:23:21.967 "subsystem": "sock", 00:23:21.967 "config": [ 00:23:21.967 { 00:23:21.967 "method": "sock_set_default_impl", 00:23:21.967 "params": { 00:23:21.967 "impl_name": "posix" 00:23:21.967 } 00:23:21.967 }, 00:23:21.968 { 00:23:21.968 "method": "sock_impl_set_options", 00:23:21.968 "params": { 00:23:21.968 "impl_name": "ssl", 00:23:21.968 "recv_buf_size": 4096, 00:23:21.968 "send_buf_size": 4096, 00:23:21.968 "enable_recv_pipe": true, 00:23:21.968 "enable_quickack": false, 00:23:21.968 "enable_placement_id": 0, 00:23:21.968 "enable_zerocopy_send_server": true, 00:23:21.968 "enable_zerocopy_send_client": false, 00:23:21.968 "zerocopy_threshold": 0, 00:23:21.968 "tls_version": 0, 00:23:21.968 "enable_ktls": false 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "sock_impl_set_options", 00:23:21.968 "params": { 00:23:21.968 "impl_name": "posix", 00:23:21.968 "recv_buf_size": 2097152, 00:23:21.968 "send_buf_size": 2097152, 00:23:21.968 "enable_recv_pipe": true, 00:23:21.968 "enable_quickack": false, 00:23:21.968 "enable_placement_id": 0, 00:23:21.968 "enable_zerocopy_send_server": true, 00:23:21.968 "enable_zerocopy_send_client": false, 00:23:21.968 "zerocopy_threshold": 0, 00:23:21.968 "tls_version": 0, 00:23:21.968 "enable_ktls": false 00:23:21.968 } 00:23:21.968 } 00:23:21.968 ] 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "subsystem": "vmd", 00:23:21.968 "config": [] 00:23:21.968 
}, 00:23:21.968 { 00:23:21.968 "subsystem": "accel", 00:23:21.968 "config": [ 00:23:21.968 { 00:23:21.968 "method": "accel_set_options", 00:23:21.968 "params": { 00:23:21.968 "small_cache_size": 128, 00:23:21.968 "large_cache_size": 16, 00:23:21.968 "task_count": 2048, 00:23:21.968 "sequence_count": 2048, 00:23:21.968 "buf_count": 2048 00:23:21.968 } 00:23:21.968 } 00:23:21.968 ] 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "subsystem": "bdev", 00:23:21.968 "config": [ 00:23:21.968 { 00:23:21.968 "method": "bdev_set_options", 00:23:21.968 "params": { 00:23:21.968 "bdev_io_pool_size": 65535, 00:23:21.968 "bdev_io_cache_size": 256, 00:23:21.968 "bdev_auto_examine": true, 00:23:21.968 "iobuf_small_cache_size": 128, 00:23:21.968 "iobuf_large_cache_size": 16 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "bdev_raid_set_options", 00:23:21.968 "params": { 00:23:21.968 "process_window_size_kb": 1024 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "bdev_iscsi_set_options", 00:23:21.968 "params": { 00:23:21.968 "timeout_sec": 30 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "bdev_nvme_set_options", 00:23:21.968 "params": { 00:23:21.968 "action_on_timeout": "none", 00:23:21.968 "timeout_us": 0, 00:23:21.968 "timeout_admin_us": 0, 00:23:21.968 "keep_alive_timeout_ms": 10000, 00:23:21.968 "arbitration_burst": 0, 00:23:21.968 "low_priority_weight": 0, 00:23:21.968 "medium_priority_weight": 0, 00:23:21.968 "high_priority_weight": 0, 00:23:21.968 "nvme_adminq_poll_period_us": 10000, 00:23:21.968 "nvme_ioq_poll_period_us": 0, 00:23:21.968 "io_queue_requests": 0, 00:23:21.968 "delay_cmd_submit": true, 00:23:21.968 "transport_retry_count": 4, 00:23:21.968 "bdev_retry_count": 3, 00:23:21.968 "transport_ack_timeout": 0, 00:23:21.968 "ctrlr_loss_timeout_sec": 0, 00:23:21.968 "reconnect_delay_sec": 0, 00:23:21.968 "fast_io_fail_timeout_sec": 0, 00:23:21.968 "disable_auto_failback": false, 00:23:21.968 "generate_uuids": 
false, 00:23:21.968 "transport_tos": 0, 00:23:21.968 "nvme_error_stat": false, 00:23:21.968 "rdma_srq_size": 0, 00:23:21.968 "io_path_stat": false, 00:23:21.968 "allow_accel_sequence": false, 00:23:21.968 "rdma_max_cq_size": 0, 00:23:21.968 "rdma_cm_event_timeout_ms": 0, 00:23:21.968 "dhchap_digests": [ 00:23:21.968 "sha256", 00:23:21.968 "sha384", 00:23:21.968 "sha512" 00:23:21.968 ], 00:23:21.968 "dhchap_dhgroups": [ 00:23:21.968 "null", 00:23:21.968 "ffdhe2048", 00:23:21.968 "ffdhe3072", 00:23:21.968 "ffdhe4096", 00:23:21.968 "ffdhe6144", 00:23:21.968 "ffdhe8192" 00:23:21.968 ] 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "bdev_nvme_set_hotplug", 00:23:21.968 "params": { 00:23:21.968 "period_us": 100000, 00:23:21.968 "enable": false 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "bdev_malloc_create", 00:23:21.968 "params": { 00:23:21.968 "name": "malloc0", 00:23:21.968 "num_blocks": 8192, 00:23:21.968 "block_size": 4096, 00:23:21.968 "physical_block_size": 4096, 00:23:21.968 "uuid": "b080d70b-4809-42e6-85d8-2e42a171f2f6", 00:23:21.968 "optimal_io_boundary": 0 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "bdev_wait_for_examine" 00:23:21.968 } 00:23:21.968 ] 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "subsystem": "nbd", 00:23:21.968 "config": [] 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "subsystem": "scheduler", 00:23:21.968 "config": [ 00:23:21.968 { 00:23:21.968 "method": "framework_set_scheduler", 00:23:21.968 "params": { 00:23:21.968 "name": "static" 00:23:21.968 } 00:23:21.968 } 00:23:21.968 ] 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "subsystem": "nvmf", 00:23:21.968 "config": [ 00:23:21.968 { 00:23:21.968 "method": "nvmf_set_config", 00:23:21.968 "params": { 00:23:21.968 "discovery_filter": "match_any", 00:23:21.968 "admin_cmd_passthru": { 00:23:21.968 "identify_ctrlr": false 00:23:21.968 } 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": 
"nvmf_set_max_subsystems", 00:23:21.968 "params": { 00:23:21.968 "max_subsystems": 1024 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "nvmf_set_crdt", 00:23:21.968 "params": { 00:23:21.968 "crdt1": 0, 00:23:21.968 "crdt2": 0, 00:23:21.968 "crdt3": 0 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "nvmf_create_transport", 00:23:21.968 "params": { 00:23:21.968 "trtype": "TCP", 00:23:21.968 "max_queue_depth": 128, 00:23:21.968 "max_io_qpairs_per_ctrlr": 127, 00:23:21.968 "in_capsule_data_size": 4096, 00:23:21.968 "max_io_size": 131072, 00:23:21.968 "io_unit_size": 131072, 00:23:21.968 "max_aq_depth": 128, 00:23:21.968 "num_shared_buffers": 511, 00:23:21.968 "buf_cache_size": 4294967295, 00:23:21.968 "dif_insert_or_strip": false, 00:23:21.968 "zcopy": false, 00:23:21.968 "c2h_success": false, 00:23:21.968 "sock_priority": 0, 00:23:21.968 "abort_timeout_sec": 1, 00:23:21.968 "ack_timeout": 0, 00:23:21.968 "data_wr_pool_size": 0 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "nvmf_create_subsystem", 00:23:21.968 "params": { 00:23:21.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.968 "allow_any_host": false, 00:23:21.968 "serial_number": "SPDK00000000000001", 00:23:21.968 "model_number": "SPDK bdev Controller", 00:23:21.968 "max_namespaces": 10, 00:23:21.968 "min_cntlid": 1, 00:23:21.968 "max_cntlid": 65519, 00:23:21.968 "ana_reporting": false 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "nvmf_subsystem_add_host", 00:23:21.968 "params": { 00:23:21.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.968 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.968 "psk": "/tmp/tmp.LmJ62m0QS8" 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "nvmf_subsystem_add_ns", 00:23:21.968 "params": { 00:23:21.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.968 "namespace": { 00:23:21.968 "nsid": 1, 00:23:21.968 "bdev_name": "malloc0", 00:23:21.968 "nguid": 
"B080D70B480942E685D82E42A171F2F6", 00:23:21.968 "uuid": "b080d70b-4809-42e6-85d8-2e42a171f2f6", 00:23:21.968 "no_auto_visible": false 00:23:21.968 } 00:23:21.968 } 00:23:21.968 }, 00:23:21.968 { 00:23:21.968 "method": "nvmf_subsystem_add_listener", 00:23:21.968 "params": { 00:23:21.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.968 "listen_address": { 00:23:21.968 "trtype": "TCP", 00:23:21.968 "adrfam": "IPv4", 00:23:21.968 "traddr": "10.0.0.2", 00:23:21.968 "trsvcid": "4420" 00:23:21.968 }, 00:23:21.968 "secure_channel": true 00:23:21.968 } 00:23:21.968 } 00:23:21.968 ] 00:23:21.969 } 00:23:21.969 ] 00:23:21.969 }' 00:23:21.969 13:58:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:22.226 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:22.226 "subsystems": [ 00:23:22.226 { 00:23:22.226 "subsystem": "keyring", 00:23:22.226 "config": [] 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "subsystem": "iobuf", 00:23:22.226 "config": [ 00:23:22.226 { 00:23:22.226 "method": "iobuf_set_options", 00:23:22.226 "params": { 00:23:22.226 "small_pool_count": 8192, 00:23:22.226 "large_pool_count": 1024, 00:23:22.226 "small_bufsize": 8192, 00:23:22.226 "large_bufsize": 135168 00:23:22.226 } 00:23:22.226 } 00:23:22.226 ] 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "subsystem": "sock", 00:23:22.226 "config": [ 00:23:22.226 { 00:23:22.226 "method": "sock_set_default_impl", 00:23:22.226 "params": { 00:23:22.226 "impl_name": "posix" 00:23:22.226 } 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "method": "sock_impl_set_options", 00:23:22.226 "params": { 00:23:22.226 "impl_name": "ssl", 00:23:22.226 "recv_buf_size": 4096, 00:23:22.226 "send_buf_size": 4096, 00:23:22.226 "enable_recv_pipe": true, 00:23:22.226 "enable_quickack": false, 00:23:22.226 "enable_placement_id": 0, 00:23:22.226 "enable_zerocopy_send_server": true, 00:23:22.226 
"enable_zerocopy_send_client": false, 00:23:22.226 "zerocopy_threshold": 0, 00:23:22.226 "tls_version": 0, 00:23:22.226 "enable_ktls": false 00:23:22.226 } 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "method": "sock_impl_set_options", 00:23:22.226 "params": { 00:23:22.226 "impl_name": "posix", 00:23:22.226 "recv_buf_size": 2097152, 00:23:22.226 "send_buf_size": 2097152, 00:23:22.226 "enable_recv_pipe": true, 00:23:22.226 "enable_quickack": false, 00:23:22.226 "enable_placement_id": 0, 00:23:22.226 "enable_zerocopy_send_server": true, 00:23:22.226 "enable_zerocopy_send_client": false, 00:23:22.226 "zerocopy_threshold": 0, 00:23:22.226 "tls_version": 0, 00:23:22.226 "enable_ktls": false 00:23:22.226 } 00:23:22.226 } 00:23:22.226 ] 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "subsystem": "vmd", 00:23:22.226 "config": [] 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "subsystem": "accel", 00:23:22.226 "config": [ 00:23:22.226 { 00:23:22.226 "method": "accel_set_options", 00:23:22.226 "params": { 00:23:22.226 "small_cache_size": 128, 00:23:22.226 "large_cache_size": 16, 00:23:22.226 "task_count": 2048, 00:23:22.226 "sequence_count": 2048, 00:23:22.226 "buf_count": 2048 00:23:22.226 } 00:23:22.226 } 00:23:22.226 ] 00:23:22.226 }, 00:23:22.226 { 00:23:22.226 "subsystem": "bdev", 00:23:22.227 "config": [ 00:23:22.227 { 00:23:22.227 "method": "bdev_set_options", 00:23:22.227 "params": { 00:23:22.227 "bdev_io_pool_size": 65535, 00:23:22.227 "bdev_io_cache_size": 256, 00:23:22.227 "bdev_auto_examine": true, 00:23:22.227 "iobuf_small_cache_size": 128, 00:23:22.227 "iobuf_large_cache_size": 16 00:23:22.227 } 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "method": "bdev_raid_set_options", 00:23:22.227 "params": { 00:23:22.227 "process_window_size_kb": 1024 00:23:22.227 } 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "method": "bdev_iscsi_set_options", 00:23:22.227 "params": { 00:23:22.227 "timeout_sec": 30 00:23:22.227 } 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "method": 
"bdev_nvme_set_options", 00:23:22.227 "params": { 00:23:22.227 "action_on_timeout": "none", 00:23:22.227 "timeout_us": 0, 00:23:22.227 "timeout_admin_us": 0, 00:23:22.227 "keep_alive_timeout_ms": 10000, 00:23:22.227 "arbitration_burst": 0, 00:23:22.227 "low_priority_weight": 0, 00:23:22.227 "medium_priority_weight": 0, 00:23:22.227 "high_priority_weight": 0, 00:23:22.227 "nvme_adminq_poll_period_us": 10000, 00:23:22.227 "nvme_ioq_poll_period_us": 0, 00:23:22.227 "io_queue_requests": 512, 00:23:22.227 "delay_cmd_submit": true, 00:23:22.227 "transport_retry_count": 4, 00:23:22.227 "bdev_retry_count": 3, 00:23:22.227 "transport_ack_timeout": 0, 00:23:22.227 "ctrlr_loss_timeout_sec": 0, 00:23:22.227 "reconnect_delay_sec": 0, 00:23:22.227 "fast_io_fail_timeout_sec": 0, 00:23:22.227 "disable_auto_failback": false, 00:23:22.227 "generate_uuids": false, 00:23:22.227 "transport_tos": 0, 00:23:22.227 "nvme_error_stat": false, 00:23:22.227 "rdma_srq_size": 0, 00:23:22.227 "io_path_stat": false, 00:23:22.227 "allow_accel_sequence": false, 00:23:22.227 "rdma_max_cq_size": 0, 00:23:22.227 "rdma_cm_event_timeout_ms": 0, 00:23:22.227 "dhchap_digests": [ 00:23:22.227 "sha256", 00:23:22.227 "sha384", 00:23:22.227 "sha512" 00:23:22.227 ], 00:23:22.227 "dhchap_dhgroups": [ 00:23:22.227 "null", 00:23:22.227 "ffdhe2048", 00:23:22.227 "ffdhe3072", 00:23:22.227 "ffdhe4096", 00:23:22.227 "ffdhe6144", 00:23:22.227 "ffdhe8192" 00:23:22.227 ] 00:23:22.227 } 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "method": "bdev_nvme_attach_controller", 00:23:22.227 "params": { 00:23:22.227 "name": "TLSTEST", 00:23:22.227 "trtype": "TCP", 00:23:22.227 "adrfam": "IPv4", 00:23:22.227 "traddr": "10.0.0.2", 00:23:22.227 "trsvcid": "4420", 00:23:22.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.227 "prchk_reftag": false, 00:23:22.227 "prchk_guard": false, 00:23:22.227 "ctrlr_loss_timeout_sec": 0, 00:23:22.227 "reconnect_delay_sec": 0, 00:23:22.227 "fast_io_fail_timeout_sec": 0, 00:23:22.227 "psk": 
"/tmp/tmp.LmJ62m0QS8", 00:23:22.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.227 "hdgst": false, 00:23:22.227 "ddgst": false 00:23:22.227 } 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "method": "bdev_nvme_set_hotplug", 00:23:22.227 "params": { 00:23:22.227 "period_us": 100000, 00:23:22.227 "enable": false 00:23:22.227 } 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "method": "bdev_wait_for_examine" 00:23:22.227 } 00:23:22.227 ] 00:23:22.227 }, 00:23:22.227 { 00:23:22.227 "subsystem": "nbd", 00:23:22.227 "config": [] 00:23:22.227 } 00:23:22.227 ] 00:23:22.227 }' 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1496729 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1496729 ']' 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1496729 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1496729 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1496729' 00:23:22.227 killing process with pid 1496729 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1496729 00:23:22.227 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.227 00:23:22.227 Latency(us) 00:23:22.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.227 =================================================================================================================== 00:23:22.227 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:23:22.227 [2024-07-14 13:59:00.113525] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:22.227 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1496729 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1496446 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1496446 ']' 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1496446 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1496446 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1496446' 00:23:22.485 killing process with pid 1496446 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1496446 00:23:22.485 [2024-07-14 13:59:00.360328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:22.485 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1496446 00:23:22.743 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:22.743 13:59:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.743 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:22.743 "subsystems": [ 00:23:22.743 { 00:23:22.743 "subsystem": "keyring", 00:23:22.743 "config": [] 00:23:22.743 }, 
00:23:22.743 { 00:23:22.743 "subsystem": "iobuf", 00:23:22.743 "config": [ 00:23:22.743 { 00:23:22.743 "method": "iobuf_set_options", 00:23:22.743 "params": { 00:23:22.743 "small_pool_count": 8192, 00:23:22.743 "large_pool_count": 1024, 00:23:22.743 "small_bufsize": 8192, 00:23:22.743 "large_bufsize": 135168 00:23:22.743 } 00:23:22.743 } 00:23:22.743 ] 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "subsystem": "sock", 00:23:22.743 "config": [ 00:23:22.743 { 00:23:22.743 "method": "sock_set_default_impl", 00:23:22.743 "params": { 00:23:22.743 "impl_name": "posix" 00:23:22.743 } 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "method": "sock_impl_set_options", 00:23:22.743 "params": { 00:23:22.743 "impl_name": "ssl", 00:23:22.743 "recv_buf_size": 4096, 00:23:22.743 "send_buf_size": 4096, 00:23:22.743 "enable_recv_pipe": true, 00:23:22.743 "enable_quickack": false, 00:23:22.743 "enable_placement_id": 0, 00:23:22.743 "enable_zerocopy_send_server": true, 00:23:22.743 "enable_zerocopy_send_client": false, 00:23:22.743 "zerocopy_threshold": 0, 00:23:22.743 "tls_version": 0, 00:23:22.743 "enable_ktls": false 00:23:22.743 } 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "method": "sock_impl_set_options", 00:23:22.743 "params": { 00:23:22.743 "impl_name": "posix", 00:23:22.743 "recv_buf_size": 2097152, 00:23:22.743 "send_buf_size": 2097152, 00:23:22.743 "enable_recv_pipe": true, 00:23:22.743 "enable_quickack": false, 00:23:22.743 "enable_placement_id": 0, 00:23:22.743 "enable_zerocopy_send_server": true, 00:23:22.743 "enable_zerocopy_send_client": false, 00:23:22.743 "zerocopy_threshold": 0, 00:23:22.743 "tls_version": 0, 00:23:22.743 "enable_ktls": false 00:23:22.743 } 00:23:22.743 } 00:23:22.743 ] 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "subsystem": "vmd", 00:23:22.743 "config": [] 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "subsystem": "accel", 00:23:22.743 "config": [ 00:23:22.743 { 00:23:22.743 "method": "accel_set_options", 00:23:22.743 "params": { 00:23:22.743 
"small_cache_size": 128, 00:23:22.743 "large_cache_size": 16, 00:23:22.743 "task_count": 2048, 00:23:22.743 "sequence_count": 2048, 00:23:22.743 "buf_count": 2048 00:23:22.743 } 00:23:22.743 } 00:23:22.743 ] 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "subsystem": "bdev", 00:23:22.743 "config": [ 00:23:22.743 { 00:23:22.743 "method": "bdev_set_options", 00:23:22.743 "params": { 00:23:22.743 "bdev_io_pool_size": 65535, 00:23:22.743 "bdev_io_cache_size": 256, 00:23:22.743 "bdev_auto_examine": true, 00:23:22.743 "iobuf_small_cache_size": 128, 00:23:22.743 "iobuf_large_cache_size": 16 00:23:22.743 } 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "method": "bdev_raid_set_options", 00:23:22.743 "params": { 00:23:22.743 "process_window_size_kb": 1024 00:23:22.743 } 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "method": "bdev_iscsi_set_options", 00:23:22.743 "params": { 00:23:22.743 "timeout_sec": 30 00:23:22.743 } 00:23:22.743 }, 00:23:22.743 { 00:23:22.743 "method": "bdev_nvme_set_options", 00:23:22.743 "params": { 00:23:22.743 "action_on_timeout": "none", 00:23:22.743 "timeout_us": 0, 00:23:22.743 "timeout_admin_us": 0, 00:23:22.743 "keep_alive_timeout_ms": 10000, 00:23:22.743 "arbitration_burst": 0, 00:23:22.743 "low_priority_weight": 0, 00:23:22.743 "medium_priority_weight": 0, 00:23:22.743 "high_priority_weight": 0, 00:23:22.743 "nvme_adminq_poll_period_us": 10000, 00:23:22.743 "nvme_ioq_poll_period_us": 0, 00:23:22.743 "io_queue_requests": 0, 00:23:22.743 "delay_cmd_submit": true, 00:23:22.743 "transport_retry_count": 4, 00:23:22.743 "bdev_retry_count": 3, 00:23:22.743 "transport_ack_timeout": 0, 00:23:22.743 "ctrlr_loss_timeout_sec": 0, 00:23:22.743 "reconnect_delay_sec": 0, 00:23:22.743 "fast_io_fail_timeout_sec": 0, 00:23:22.743 "disable_auto_failback": false, 00:23:22.743 "generate_uuids": false, 00:23:22.743 "transport_tos": 0, 00:23:22.743 "nvme_error_stat": false, 00:23:22.743 "rdma_srq_size": 0, 00:23:22.743 "io_path_stat": false, 00:23:22.743 
"allow_accel_sequence": false, 00:23:22.743 "rdma_max_cq_size": 0, 00:23:22.743 "rdma_cm_event_timeout_ms": 0, 00:23:22.743 "dhchap_digests": [ 00:23:22.743 "sha256", 00:23:22.743 "sha384", 00:23:22.743 "sha512" 00:23:22.743 ], 00:23:22.743 "dhchap_dhgroups": [ 00:23:22.744 "null", 00:23:22.744 "ffdhe2048", 00:23:22.744 "ffdhe3072", 00:23:22.744 "ffdhe4096", 00:23:22.744 "ffdhe6144", 00:23:22.744 "ffdhe8192" 00:23:22.744 ] 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "bdev_nvme_set_hotplug", 00:23:22.744 "params": { 00:23:22.744 "period_us": 100000, 00:23:22.744 "enable": false 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "bdev_malloc_create", 00:23:22.744 "params": { 00:23:22.744 "name": "malloc0", 00:23:22.744 "num_blocks": 8192, 00:23:22.744 "block_size": 4096, 00:23:22.744 "physical_block_size": 4096, 00:23:22.744 "uuid": "b080d70b-4809-42e6-85d8-2e42a171f2f6", 00:23:22.744 "optimal_io_boundary": 0 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "bdev_wait_for_examine" 00:23:22.744 } 00:23:22.744 ] 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "subsystem": "nbd", 00:23:22.744 "config": [] 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "subsystem": "scheduler", 00:23:22.744 "config": [ 00:23:22.744 { 00:23:22.744 "method": "framework_set_scheduler", 00:23:22.744 "params": { 00:23:22.744 "name": "static" 00:23:22.744 } 00:23:22.744 } 00:23:22.744 ] 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "subsystem": "nvmf", 00:23:22.744 "config": [ 00:23:22.744 { 00:23:22.744 "method": "nvmf_set_config", 00:23:22.744 "params": { 00:23:22.744 "discovery_filter": "match_any", 00:23:22.744 "admin_cmd_passthru": { 00:23:22.744 "identify_ctrlr": false 00:23:22.744 } 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "nvmf_set_max_subsystems", 00:23:22.744 "params": { 00:23:22.744 "max_subsystems": 1024 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "nvmf_set_crdt", 00:23:22.744 
"params": { 00:23:22.744 "crdt1": 0, 00:23:22.744 "crdt2": 0, 00:23:22.744 "crdt3": 0 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "nvmf_create_transport", 00:23:22.744 "params": { 00:23:22.744 "trtype": "TCP", 00:23:22.744 "max_queue_depth": 128, 00:23:22.744 "max_io_qpairs_per_ctrlr": 127, 00:23:22.744 "in_capsule_data_size": 4096, 00:23:22.744 "max_io_size": 131072, 00:23:22.744 "io_unit_size": 131072, 00:23:22.744 "max_aq_depth": 128, 00:23:22.744 "num_shared_buffers": 511, 00:23:22.744 "buf_cache_size": 4294967295, 00:23:22.744 "dif_insert_or_strip": false, 00:23:22.744 "zcopy": false, 00:23:22.744 "c2h_success": false, 00:23:22.744 "sock_priority": 0, 00:23:22.744 "abort_timeout_sec": 1, 00:23:22.744 "ack_timeout": 0, 00:23:22.744 "data_wr_pool_size": 0 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "nvmf_create_subsystem", 00:23:22.744 "params": { 00:23:22.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.744 "allow_any_host": false, 00:23:22.744 "serial_number": "SPDK00000000000001", 00:23:22.744 "model_number": "SPDK bdev Controller", 00:23:22.744 "max_namespaces": 10, 00:23:22.744 "min_cntlid": 1, 00:23:22.744 "max_cntlid": 65519, 00:23:22.744 "ana_reporting": false 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "nvmf_subsystem_add_host", 00:23:22.744 "params": { 00:23:22.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.744 "host": "nqn.2016-06.io.spdk:host1", 00:23:22.744 "psk": "/tmp/tmp.LmJ62m0QS8" 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 "method": "nvmf_subsystem_add_ns", 00:23:22.744 "params": { 00:23:22.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.744 "namespace": { 00:23:22.744 "nsid": 1, 00:23:22.744 "bdev_name": "malloc0", 00:23:22.744 "nguid": "B080D70B480942E685D82E42A171F2F6", 00:23:22.744 "uuid": "b080d70b-4809-42e6-85d8-2e42a171f2f6", 00:23:22.744 "no_auto_visible": false 00:23:22.744 } 00:23:22.744 } 00:23:22.744 }, 00:23:22.744 { 00:23:22.744 
"method": "nvmf_subsystem_add_listener", 00:23:22.744 "params": { 00:23:22.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.744 "listen_address": { 00:23:22.744 "trtype": "TCP", 00:23:22.744 "adrfam": "IPv4", 00:23:22.744 "traddr": "10.0.0.2", 00:23:22.744 "trsvcid": "4420" 00:23:22.744 }, 00:23:22.744 "secure_channel": true 00:23:22.744 } 00:23:22.744 } 00:23:22.744 ] 00:23:22.744 } 00:23:22.744 ] 00:23:22.744 }' 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1496909 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1496909 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1496909 ']' 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:22.744 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.744 [2024-07-14 13:59:00.659287] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:22.744 [2024-07-14 13:59:00.659364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.744 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.002 [2024-07-14 13:59:00.725950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.002 [2024-07-14 13:59:00.811721] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.002 [2024-07-14 13:59:00.811776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.002 [2024-07-14 13:59:00.811805] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.002 [2024-07-14 13:59:00.811816] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.002 [2024-07-14 13:59:00.811826] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.002 [2024-07-14 13:59:00.811925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.260 [2024-07-14 13:59:01.045859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.260 [2024-07-14 13:59:01.061817] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.260 [2024-07-14 13:59:01.077896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.260 [2024-07-14 13:59:01.097035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1497035 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1497035 /var/tmp/bdevperf.sock 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1497035 ']' 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.827 13:59:01 nvmf_tcp.nvmf_tls 
-- target/tls.sh@204 -- # echo '{ 00:23:23.827 "subsystems": [ 00:23:23.827 { 00:23:23.827 "subsystem": "keyring", 00:23:23.827 "config": [] 00:23:23.827 }, 00:23:23.827 { 00:23:23.827 "subsystem": "iobuf", 00:23:23.827 "config": [ 00:23:23.827 { 00:23:23.827 "method": "iobuf_set_options", 00:23:23.827 "params": { 00:23:23.827 "small_pool_count": 8192, 00:23:23.827 "large_pool_count": 1024, 00:23:23.827 "small_bufsize": 8192, 00:23:23.827 "large_bufsize": 135168 00:23:23.827 } 00:23:23.827 } 00:23:23.827 ] 00:23:23.827 }, 00:23:23.827 { 00:23:23.827 "subsystem": "sock", 00:23:23.827 "config": [ 00:23:23.827 { 00:23:23.827 "method": "sock_set_default_impl", 00:23:23.827 "params": { 00:23:23.827 "impl_name": "posix" 00:23:23.827 } 00:23:23.827 }, 00:23:23.827 { 00:23:23.827 "method": "sock_impl_set_options", 00:23:23.827 "params": { 00:23:23.827 "impl_name": "ssl", 00:23:23.827 "recv_buf_size": 4096, 00:23:23.827 "send_buf_size": 4096, 00:23:23.827 "enable_recv_pipe": true, 00:23:23.827 "enable_quickack": false, 00:23:23.827 "enable_placement_id": 0, 00:23:23.827 "enable_zerocopy_send_server": true, 00:23:23.827 "enable_zerocopy_send_client": false, 00:23:23.827 "zerocopy_threshold": 0, 00:23:23.827 "tls_version": 0, 00:23:23.827 "enable_ktls": false 00:23:23.827 } 00:23:23.827 }, 00:23:23.827 { 00:23:23.828 "method": "sock_impl_set_options", 00:23:23.828 "params": { 00:23:23.828 "impl_name": "posix", 00:23:23.828 "recv_buf_size": 2097152, 00:23:23.828 "send_buf_size": 2097152, 00:23:23.828 "enable_recv_pipe": true, 00:23:23.828 "enable_quickack": false, 00:23:23.828 "enable_placement_id": 0, 00:23:23.828 "enable_zerocopy_send_server": true, 00:23:23.828 "enable_zerocopy_send_client": false, 00:23:23.828 "zerocopy_threshold": 0, 00:23:23.828 "tls_version": 0, 00:23:23.828 "enable_ktls": false 00:23:23.828 } 00:23:23.828 } 00:23:23.828 ] 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "subsystem": "vmd", 00:23:23.828 "config": [] 00:23:23.828 }, 00:23:23.828 { 
00:23:23.828 "subsystem": "accel", 00:23:23.828 "config": [ 00:23:23.828 { 00:23:23.828 "method": "accel_set_options", 00:23:23.828 "params": { 00:23:23.828 "small_cache_size": 128, 00:23:23.828 "large_cache_size": 16, 00:23:23.828 "task_count": 2048, 00:23:23.828 "sequence_count": 2048, 00:23:23.828 "buf_count": 2048 00:23:23.828 } 00:23:23.828 } 00:23:23.828 ] 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "subsystem": "bdev", 00:23:23.828 "config": [ 00:23:23.828 { 00:23:23.828 "method": "bdev_set_options", 00:23:23.828 "params": { 00:23:23.828 "bdev_io_pool_size": 65535, 00:23:23.828 "bdev_io_cache_size": 256, 00:23:23.828 "bdev_auto_examine": true, 00:23:23.828 "iobuf_small_cache_size": 128, 00:23:23.828 "iobuf_large_cache_size": 16 00:23:23.828 } 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "method": "bdev_raid_set_options", 00:23:23.828 "params": { 00:23:23.828 "process_window_size_kb": 1024 00:23:23.828 } 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "method": "bdev_iscsi_set_options", 00:23:23.828 "params": { 00:23:23.828 "timeout_sec": 30 00:23:23.828 } 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "method": "bdev_nvme_set_options", 00:23:23.828 "params": { 00:23:23.828 "action_on_timeout": "none", 00:23:23.828 "timeout_us": 0, 00:23:23.828 "timeout_admin_us": 0, 00:23:23.828 "keep_alive_timeout_ms": 10000, 00:23:23.828 "arbitration_burst": 0, 00:23:23.828 "low_priority_weight": 0, 00:23:23.828 "medium_priority_weight": 0, 00:23:23.828 "high_priority_weight": 0, 00:23:23.828 "nvme_adminq_poll_period_us": 10000, 00:23:23.828 "nvme_ioq_poll_period_us": 0, 00:23:23.828 "io_queue_requests": 512, 00:23:23.828 "delay_cmd_submit": true, 00:23:23.828 "transport_retry_count": 4, 00:23:23.828 "bdev_retry_count": 3, 00:23:23.828 "transport_ack_timeout": 0, 00:23:23.828 "ctrlr_loss_timeout_sec": 0, 00:23:23.828 "reconnect_delay_sec": 0, 00:23:23.828 "fast_io_fail_timeout_sec": 0, 00:23:23.828 "disable_auto_failback": false, 00:23:23.828 "generate_uuids": false, 00:23:23.828 
"transport_tos": 0, 00:23:23.828 "nvme_error_stat": false, 00:23:23.828 "rdma_srq_size": 0, 00:23:23.828 "io_path_stat": false, 00:23:23.828 "allow_accel_sequence": false, 00:23:23.828 "rdma_max_cq_size": 0, 00:23:23.828 "rdma_cm_event_timeout_ms": 0, 00:23:23.828 "dhchap_digests": [ 00:23:23.828 "sha256", 00:23:23.828 "sha384", 00:23:23.828 "sha512" 00:23:23.828 ], 00:23:23.828 "dhchap_dhgroups": [ 00:23:23.828 "null", 00:23:23.828 "ffdhe2048", 00:23:23.828 "ffdhe3072", 00:23:23.828 "ffdhe4096", 00:23:23.828 "ffdhe6144", 00:23:23.828 "ffdhe8192" 00:23:23.828 ] 00:23:23.828 } 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "method": "bdev_nvme_attach_controller", 00:23:23.828 "params": { 00:23:23.828 "name": "TLSTEST", 00:23:23.828 "trtype": "TCP", 00:23:23.828 "adrfam": "IPv4", 00:23:23.828 "traddr": "10.0.0.2", 00:23:23.828 "trsvcid": "4420", 00:23:23.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.828 "prchk_reftag": false, 00:23:23.828 "prchk_guard": false, 00:23:23.828 "ctrlr_loss_timeout_sec": 0, 00:23:23.828 "reconnect_delay_sec": 0, 00:23:23.828 "fast_io_fail_timeout_sec": 0, 00:23:23.828 "psk": "/tmp/tmp.LmJ62m0QS8", 00:23:23.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.828 "hdgst": false, 00:23:23.828 "ddgst": false 00:23:23.828 } 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "method": "bdev_nvme_set_hotplug", 00:23:23.828 "params": { 00:23:23.828 "period_us": 100000, 00:23:23.828 "enable": false 00:23:23.828 } 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "method": "bdev_wait_for_examine" 00:23:23.828 } 00:23:23.828 ] 00:23:23.828 }, 00:23:23.828 { 00:23:23.828 "subsystem": "nbd", 00:23:23.828 "config": [] 00:23:23.828 } 00:23:23.828 ] 00:23:23.828 }' 00:23:23.828 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:23.828 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.828 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.828 [2024-07-14 13:59:01.668664] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:23.828 [2024-07-14 13:59:01.668741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497035 ] 00:23:23.828 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.828 [2024-07-14 13:59:01.727983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.087 [2024-07-14 13:59:01.816970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.087 [2024-07-14 13:59:01.984092] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.087 [2024-07-14 13:59:01.984263] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.653 13:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.653 13:59:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:24.653 13:59:02 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:24.911 Running I/O for 10 seconds... 
00:23:34.895 00:23:34.895 Latency(us) 00:23:34.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:34.895 Verification LBA range: start 0x0 length 0x2000 00:23:34.895 TLSTESTn1 : 10.03 3567.22 13.93 0.00 0.00 35809.54 8107.05 39030.33 00:23:34.895 =================================================================================================================== 00:23:34.895 Total : 3567.22 13.93 0.00 0.00 35809.54 8107.05 39030.33 00:23:34.895 0 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1497035 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1497035 ']' 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1497035 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1497035 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1497035' 00:23:34.895 killing process with pid 1497035 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1497035 00:23:34.895 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.895 00:23:34.895 Latency(us) 00:23:34.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.895 
=================================================================================================================== 00:23:34.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.895 [2024-07-14 13:59:12.830502] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:34.895 13:59:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1497035 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1496909 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1496909 ']' 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1496909 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1496909 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1496909' 00:23:35.153 killing process with pid 1496909 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1496909 00:23:35.153 [2024-07-14 13:59:13.077579] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:35.153 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1496909 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1498495 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1498495 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1498495 ']' 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.414 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.414 [2024-07-14 13:59:13.372119] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:35.414 [2024-07-14 13:59:13.372209] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.672 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.672 [2024-07-14 13:59:13.441672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.672 [2024-07-14 13:59:13.529669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:35.672 [2024-07-14 13:59:13.529738] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.672 [2024-07-14 13:59:13.529755] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.672 [2024-07-14 13:59:13.529768] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.672 [2024-07-14 13:59:13.529780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.672 [2024-07-14 13:59:13.529816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.673 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.673 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.673 13:59:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.673 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.673 13:59:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.930 13:59:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.930 13:59:13 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LmJ62m0QS8 00:23:35.930 13:59:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LmJ62m0QS8 00:23:35.930 13:59:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.930 [2024-07-14 13:59:13.879689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.930 13:59:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.186 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.443 [2024-07-14 13:59:14.377089] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.443 [2024-07-14 13:59:14.377348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.443 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.700 malloc0 00:23:36.700 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:36.957 13:59:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LmJ62m0QS8 00:23:37.215 [2024-07-14 13:59:15.118845] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1498660 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1498660 /var/tmp/bdevperf.sock 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1498660 ']' 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:37.215 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.215 [2024-07-14 13:59:15.182750] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:37.215 [2024-07-14 13:59:15.182825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498660 ] 00:23:37.474 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.474 [2024-07-14 13:59:15.243461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.474 [2024-07-14 13:59:15.330781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.474 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:37.474 13:59:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:37.474 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LmJ62m0QS8 00:23:37.732 13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:37.989 [2024-07-14 13:59:15.901722] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.247 nvme0n1 00:23:38.247 
13:59:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.247 Running I/O for 1 seconds... 00:23:39.180 00:23:39.180 Latency(us) 00:23:39.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.180 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:39.180 Verification LBA range: start 0x0 length 0x2000 00:23:39.180 nvme0n1 : 1.02 3428.22 13.39 0.00 0.00 36997.17 6699.24 32816.55 00:23:39.180 =================================================================================================================== 00:23:39.180 Total : 3428.22 13.39 0.00 0.00 36997.17 6699.24 32816.55 00:23:39.180 0 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1498660 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1498660 ']' 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1498660 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1498660 00:23:39.180 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:39.181 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:39.181 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1498660' 00:23:39.181 killing process with pid 1498660 00:23:39.181 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1498660 00:23:39.181 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.181 00:23:39.181 Latency(us) 00:23:39.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:39.181 =================================================================================================================== 00:23:39.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.181 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1498660 00:23:39.438 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1498495 00:23:39.438 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1498495 ']' 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1498495 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1498495 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1498495' 00:23:39.439 killing process with pid 1498495 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1498495 00:23:39.439 [2024-07-14 13:59:17.412056] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:39.439 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1498495 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=1498953 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1498953 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1498953 ']' 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.697 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.955 [2024-07-14 13:59:17.696768] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:39.955 [2024-07-14 13:59:17.696845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.955 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.955 [2024-07-14 13:59:17.764271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.955 [2024-07-14 13:59:17.852185] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.955 [2024-07-14 13:59:17.852261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:39.955 [2024-07-14 13:59:17.852277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.955 [2024-07-14 13:59:17.852290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.955 [2024-07-14 13:59:17.852301] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.955 [2024-07-14 13:59:17.852336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.213 13:59:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.213 [2024-07-14 13:59:17.999702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.213 malloc0 00:23:40.213 [2024-07-14 13:59:18.032018] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.213 [2024-07-14 13:59:18.032301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1499083 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1499083 /var/tmp/bdevperf.sock 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1499083 ']' 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.213 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.213 [2024-07-14 13:59:18.102115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:40.213 [2024-07-14 13:59:18.102190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499083 ] 00:23:40.213 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.213 [2024-07-14 13:59:18.163895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.471 [2024-07-14 13:59:18.255366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.471 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.471 13:59:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:40.471 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LmJ62m0QS8 00:23:40.729 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:40.987 [2024-07-14 13:59:18.896513] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.987 nvme0n1 00:23:41.245 13:59:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.245 Running I/O for 1 seconds... 
00:23:42.179 00:23:42.179 Latency(us) 00:23:42.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.179 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:42.179 Verification LBA range: start 0x0 length 0x2000 00:23:42.179 nvme0n1 : 1.03 3317.39 12.96 0.00 0.00 38081.46 5946.79 48351.00 00:23:42.179 =================================================================================================================== 00:23:42.179 Total : 3317.39 12.96 0.00 0.00 38081.46 5946.79 48351.00 00:23:42.179 0 00:23:42.179 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:42.179 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.179 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.437 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.437 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:42.437 "subsystems": [ 00:23:42.437 { 00:23:42.437 "subsystem": "keyring", 00:23:42.437 "config": [ 00:23:42.437 { 00:23:42.437 "method": "keyring_file_add_key", 00:23:42.437 "params": { 00:23:42.437 "name": "key0", 00:23:42.437 "path": "/tmp/tmp.LmJ62m0QS8" 00:23:42.437 } 00:23:42.437 } 00:23:42.437 ] 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "subsystem": "iobuf", 00:23:42.437 "config": [ 00:23:42.437 { 00:23:42.437 "method": "iobuf_set_options", 00:23:42.437 "params": { 00:23:42.437 "small_pool_count": 8192, 00:23:42.437 "large_pool_count": 1024, 00:23:42.437 "small_bufsize": 8192, 00:23:42.437 "large_bufsize": 135168 00:23:42.437 } 00:23:42.437 } 00:23:42.437 ] 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "subsystem": "sock", 00:23:42.437 "config": [ 00:23:42.437 { 00:23:42.437 "method": "sock_set_default_impl", 00:23:42.437 "params": { 00:23:42.437 "impl_name": "posix" 00:23:42.437 } 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "method": "sock_impl_set_options", 00:23:42.437 
"params": { 00:23:42.437 "impl_name": "ssl", 00:23:42.437 "recv_buf_size": 4096, 00:23:42.437 "send_buf_size": 4096, 00:23:42.437 "enable_recv_pipe": true, 00:23:42.437 "enable_quickack": false, 00:23:42.437 "enable_placement_id": 0, 00:23:42.437 "enable_zerocopy_send_server": true, 00:23:42.437 "enable_zerocopy_send_client": false, 00:23:42.437 "zerocopy_threshold": 0, 00:23:42.437 "tls_version": 0, 00:23:42.437 "enable_ktls": false 00:23:42.437 } 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "method": "sock_impl_set_options", 00:23:42.437 "params": { 00:23:42.437 "impl_name": "posix", 00:23:42.437 "recv_buf_size": 2097152, 00:23:42.437 "send_buf_size": 2097152, 00:23:42.437 "enable_recv_pipe": true, 00:23:42.437 "enable_quickack": false, 00:23:42.437 "enable_placement_id": 0, 00:23:42.437 "enable_zerocopy_send_server": true, 00:23:42.437 "enable_zerocopy_send_client": false, 00:23:42.437 "zerocopy_threshold": 0, 00:23:42.437 "tls_version": 0, 00:23:42.437 "enable_ktls": false 00:23:42.437 } 00:23:42.437 } 00:23:42.437 ] 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "subsystem": "vmd", 00:23:42.437 "config": [] 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "subsystem": "accel", 00:23:42.437 "config": [ 00:23:42.437 { 00:23:42.437 "method": "accel_set_options", 00:23:42.437 "params": { 00:23:42.437 "small_cache_size": 128, 00:23:42.437 "large_cache_size": 16, 00:23:42.437 "task_count": 2048, 00:23:42.437 "sequence_count": 2048, 00:23:42.437 "buf_count": 2048 00:23:42.437 } 00:23:42.437 } 00:23:42.437 ] 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "subsystem": "bdev", 00:23:42.437 "config": [ 00:23:42.437 { 00:23:42.437 "method": "bdev_set_options", 00:23:42.437 "params": { 00:23:42.437 "bdev_io_pool_size": 65535, 00:23:42.437 "bdev_io_cache_size": 256, 00:23:42.437 "bdev_auto_examine": true, 00:23:42.437 "iobuf_small_cache_size": 128, 00:23:42.437 "iobuf_large_cache_size": 16 00:23:42.437 } 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "method": "bdev_raid_set_options", 
00:23:42.437 "params": { 00:23:42.437 "process_window_size_kb": 1024 00:23:42.437 } 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "method": "bdev_iscsi_set_options", 00:23:42.437 "params": { 00:23:42.437 "timeout_sec": 30 00:23:42.437 } 00:23:42.437 }, 00:23:42.437 { 00:23:42.437 "method": "bdev_nvme_set_options", 00:23:42.437 "params": { 00:23:42.437 "action_on_timeout": "none", 00:23:42.437 "timeout_us": 0, 00:23:42.437 "timeout_admin_us": 0, 00:23:42.437 "keep_alive_timeout_ms": 10000, 00:23:42.437 "arbitration_burst": 0, 00:23:42.437 "low_priority_weight": 0, 00:23:42.437 "medium_priority_weight": 0, 00:23:42.437 "high_priority_weight": 0, 00:23:42.437 "nvme_adminq_poll_period_us": 10000, 00:23:42.437 "nvme_ioq_poll_period_us": 0, 00:23:42.438 "io_queue_requests": 0, 00:23:42.438 "delay_cmd_submit": true, 00:23:42.438 "transport_retry_count": 4, 00:23:42.438 "bdev_retry_count": 3, 00:23:42.438 "transport_ack_timeout": 0, 00:23:42.438 "ctrlr_loss_timeout_sec": 0, 00:23:42.438 "reconnect_delay_sec": 0, 00:23:42.438 "fast_io_fail_timeout_sec": 0, 00:23:42.438 "disable_auto_failback": false, 00:23:42.438 "generate_uuids": false, 00:23:42.438 "transport_tos": 0, 00:23:42.438 "nvme_error_stat": false, 00:23:42.438 "rdma_srq_size": 0, 00:23:42.438 "io_path_stat": false, 00:23:42.438 "allow_accel_sequence": false, 00:23:42.438 "rdma_max_cq_size": 0, 00:23:42.438 "rdma_cm_event_timeout_ms": 0, 00:23:42.438 "dhchap_digests": [ 00:23:42.438 "sha256", 00:23:42.438 "sha384", 00:23:42.438 "sha512" 00:23:42.438 ], 00:23:42.438 "dhchap_dhgroups": [ 00:23:42.438 "null", 00:23:42.438 "ffdhe2048", 00:23:42.438 "ffdhe3072", 00:23:42.438 "ffdhe4096", 00:23:42.438 "ffdhe6144", 00:23:42.438 "ffdhe8192" 00:23:42.438 ] 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "bdev_nvme_set_hotplug", 00:23:42.438 "params": { 00:23:42.438 "period_us": 100000, 00:23:42.438 "enable": false 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "bdev_malloc_create", 
00:23:42.438 "params": { 00:23:42.438 "name": "malloc0", 00:23:42.438 "num_blocks": 8192, 00:23:42.438 "block_size": 4096, 00:23:42.438 "physical_block_size": 4096, 00:23:42.438 "uuid": "e7ff73d3-2b77-4da0-ad10-15d002011f94", 00:23:42.438 "optimal_io_boundary": 0 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "bdev_wait_for_examine" 00:23:42.438 } 00:23:42.438 ] 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "subsystem": "nbd", 00:23:42.438 "config": [] 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "subsystem": "scheduler", 00:23:42.438 "config": [ 00:23:42.438 { 00:23:42.438 "method": "framework_set_scheduler", 00:23:42.438 "params": { 00:23:42.438 "name": "static" 00:23:42.438 } 00:23:42.438 } 00:23:42.438 ] 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "subsystem": "nvmf", 00:23:42.438 "config": [ 00:23:42.438 { 00:23:42.438 "method": "nvmf_set_config", 00:23:42.438 "params": { 00:23:42.438 "discovery_filter": "match_any", 00:23:42.438 "admin_cmd_passthru": { 00:23:42.438 "identify_ctrlr": false 00:23:42.438 } 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_set_max_subsystems", 00:23:42.438 "params": { 00:23:42.438 "max_subsystems": 1024 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_set_crdt", 00:23:42.438 "params": { 00:23:42.438 "crdt1": 0, 00:23:42.438 "crdt2": 0, 00:23:42.438 "crdt3": 0 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_create_transport", 00:23:42.438 "params": { 00:23:42.438 "trtype": "TCP", 00:23:42.438 "max_queue_depth": 128, 00:23:42.438 "max_io_qpairs_per_ctrlr": 127, 00:23:42.438 "in_capsule_data_size": 4096, 00:23:42.438 "max_io_size": 131072, 00:23:42.438 "io_unit_size": 131072, 00:23:42.438 "max_aq_depth": 128, 00:23:42.438 "num_shared_buffers": 511, 00:23:42.438 "buf_cache_size": 4294967295, 00:23:42.438 "dif_insert_or_strip": false, 00:23:42.438 "zcopy": false, 00:23:42.438 "c2h_success": false, 00:23:42.438 "sock_priority": 0, 
00:23:42.438 "abort_timeout_sec": 1, 00:23:42.438 "ack_timeout": 0, 00:23:42.438 "data_wr_pool_size": 0 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_create_subsystem", 00:23:42.438 "params": { 00:23:42.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.438 "allow_any_host": false, 00:23:42.438 "serial_number": "00000000000000000000", 00:23:42.438 "model_number": "SPDK bdev Controller", 00:23:42.438 "max_namespaces": 32, 00:23:42.438 "min_cntlid": 1, 00:23:42.438 "max_cntlid": 65519, 00:23:42.438 "ana_reporting": false 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_subsystem_add_host", 00:23:42.438 "params": { 00:23:42.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.438 "host": "nqn.2016-06.io.spdk:host1", 00:23:42.438 "psk": "key0" 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_subsystem_add_ns", 00:23:42.438 "params": { 00:23:42.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.438 "namespace": { 00:23:42.438 "nsid": 1, 00:23:42.438 "bdev_name": "malloc0", 00:23:42.438 "nguid": "E7FF73D32B774DA0AD1015D002011F94", 00:23:42.438 "uuid": "e7ff73d3-2b77-4da0-ad10-15d002011f94", 00:23:42.438 "no_auto_visible": false 00:23:42.438 } 00:23:42.438 } 00:23:42.438 }, 00:23:42.438 { 00:23:42.438 "method": "nvmf_subsystem_add_listener", 00:23:42.438 "params": { 00:23:42.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.438 "listen_address": { 00:23:42.438 "trtype": "TCP", 00:23:42.438 "adrfam": "IPv4", 00:23:42.438 "traddr": "10.0.0.2", 00:23:42.438 "trsvcid": "4420" 00:23:42.438 }, 00:23:42.438 "secure_channel": true 00:23:42.438 } 00:23:42.438 } 00:23:42.438 ] 00:23:42.438 } 00:23:42.438 ] 00:23:42.438 }' 00:23:42.438 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:42.697 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:42.697 "subsystems": [ 00:23:42.697 { 
00:23:42.697 "subsystem": "keyring", 00:23:42.697 "config": [ 00:23:42.697 { 00:23:42.697 "method": "keyring_file_add_key", 00:23:42.697 "params": { 00:23:42.697 "name": "key0", 00:23:42.697 "path": "/tmp/tmp.LmJ62m0QS8" 00:23:42.697 } 00:23:42.697 } 00:23:42.697 ] 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "subsystem": "iobuf", 00:23:42.697 "config": [ 00:23:42.697 { 00:23:42.697 "method": "iobuf_set_options", 00:23:42.697 "params": { 00:23:42.697 "small_pool_count": 8192, 00:23:42.697 "large_pool_count": 1024, 00:23:42.697 "small_bufsize": 8192, 00:23:42.697 "large_bufsize": 135168 00:23:42.697 } 00:23:42.697 } 00:23:42.697 ] 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "subsystem": "sock", 00:23:42.697 "config": [ 00:23:42.697 { 00:23:42.697 "method": "sock_set_default_impl", 00:23:42.697 "params": { 00:23:42.697 "impl_name": "posix" 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "sock_impl_set_options", 00:23:42.697 "params": { 00:23:42.697 "impl_name": "ssl", 00:23:42.697 "recv_buf_size": 4096, 00:23:42.697 "send_buf_size": 4096, 00:23:42.697 "enable_recv_pipe": true, 00:23:42.697 "enable_quickack": false, 00:23:42.697 "enable_placement_id": 0, 00:23:42.697 "enable_zerocopy_send_server": true, 00:23:42.697 "enable_zerocopy_send_client": false, 00:23:42.697 "zerocopy_threshold": 0, 00:23:42.697 "tls_version": 0, 00:23:42.697 "enable_ktls": false 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "sock_impl_set_options", 00:23:42.697 "params": { 00:23:42.697 "impl_name": "posix", 00:23:42.697 "recv_buf_size": 2097152, 00:23:42.697 "send_buf_size": 2097152, 00:23:42.697 "enable_recv_pipe": true, 00:23:42.697 "enable_quickack": false, 00:23:42.697 "enable_placement_id": 0, 00:23:42.697 "enable_zerocopy_send_server": true, 00:23:42.697 "enable_zerocopy_send_client": false, 00:23:42.697 "zerocopy_threshold": 0, 00:23:42.697 "tls_version": 0, 00:23:42.697 "enable_ktls": false 00:23:42.697 } 00:23:42.697 } 00:23:42.697 ] 
00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "subsystem": "vmd", 00:23:42.697 "config": [] 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "subsystem": "accel", 00:23:42.697 "config": [ 00:23:42.697 { 00:23:42.697 "method": "accel_set_options", 00:23:42.697 "params": { 00:23:42.697 "small_cache_size": 128, 00:23:42.697 "large_cache_size": 16, 00:23:42.697 "task_count": 2048, 00:23:42.697 "sequence_count": 2048, 00:23:42.697 "buf_count": 2048 00:23:42.697 } 00:23:42.697 } 00:23:42.697 ] 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "subsystem": "bdev", 00:23:42.697 "config": [ 00:23:42.697 { 00:23:42.697 "method": "bdev_set_options", 00:23:42.697 "params": { 00:23:42.697 "bdev_io_pool_size": 65535, 00:23:42.697 "bdev_io_cache_size": 256, 00:23:42.697 "bdev_auto_examine": true, 00:23:42.697 "iobuf_small_cache_size": 128, 00:23:42.697 "iobuf_large_cache_size": 16 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_raid_set_options", 00:23:42.697 "params": { 00:23:42.697 "process_window_size_kb": 1024 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_iscsi_set_options", 00:23:42.697 "params": { 00:23:42.697 "timeout_sec": 30 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_nvme_set_options", 00:23:42.697 "params": { 00:23:42.697 "action_on_timeout": "none", 00:23:42.697 "timeout_us": 0, 00:23:42.697 "timeout_admin_us": 0, 00:23:42.697 "keep_alive_timeout_ms": 10000, 00:23:42.697 "arbitration_burst": 0, 00:23:42.697 "low_priority_weight": 0, 00:23:42.697 "medium_priority_weight": 0, 00:23:42.697 "high_priority_weight": 0, 00:23:42.697 "nvme_adminq_poll_period_us": 10000, 00:23:42.697 "nvme_ioq_poll_period_us": 0, 00:23:42.697 "io_queue_requests": 512, 00:23:42.697 "delay_cmd_submit": true, 00:23:42.697 "transport_retry_count": 4, 00:23:42.697 "bdev_retry_count": 3, 00:23:42.697 "transport_ack_timeout": 0, 00:23:42.697 "ctrlr_loss_timeout_sec": 0, 00:23:42.697 "reconnect_delay_sec": 0, 00:23:42.697 
"fast_io_fail_timeout_sec": 0, 00:23:42.697 "disable_auto_failback": false, 00:23:42.697 "generate_uuids": false, 00:23:42.697 "transport_tos": 0, 00:23:42.697 "nvme_error_stat": false, 00:23:42.697 "rdma_srq_size": 0, 00:23:42.697 "io_path_stat": false, 00:23:42.697 "allow_accel_sequence": false, 00:23:42.697 "rdma_max_cq_size": 0, 00:23:42.697 "rdma_cm_event_timeout_ms": 0, 00:23:42.697 "dhchap_digests": [ 00:23:42.697 "sha256", 00:23:42.697 "sha384", 00:23:42.697 "sha512" 00:23:42.697 ], 00:23:42.697 "dhchap_dhgroups": [ 00:23:42.697 "null", 00:23:42.697 "ffdhe2048", 00:23:42.697 "ffdhe3072", 00:23:42.697 "ffdhe4096", 00:23:42.697 "ffdhe6144", 00:23:42.697 "ffdhe8192" 00:23:42.697 ] 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_nvme_attach_controller", 00:23:42.697 "params": { 00:23:42.697 "name": "nvme0", 00:23:42.697 "trtype": "TCP", 00:23:42.697 "adrfam": "IPv4", 00:23:42.697 "traddr": "10.0.0.2", 00:23:42.697 "trsvcid": "4420", 00:23:42.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.697 "prchk_reftag": false, 00:23:42.697 "prchk_guard": false, 00:23:42.697 "ctrlr_loss_timeout_sec": 0, 00:23:42.697 "reconnect_delay_sec": 0, 00:23:42.697 "fast_io_fail_timeout_sec": 0, 00:23:42.697 "psk": "key0", 00:23:42.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.697 "hdgst": false, 00:23:42.697 "ddgst": false 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_nvme_set_hotplug", 00:23:42.697 "params": { 00:23:42.697 "period_us": 100000, 00:23:42.697 "enable": false 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_enable_histogram", 00:23:42.697 "params": { 00:23:42.697 "name": "nvme0n1", 00:23:42.697 "enable": true 00:23:42.697 } 00:23:42.697 }, 00:23:42.697 { 00:23:42.697 "method": "bdev_wait_for_examine" 00:23:42.697 } 00:23:42.697 ] 00:23:42.697 }, 00:23:42.697 { 00:23:42.698 "subsystem": "nbd", 00:23:42.698 "config": [] 00:23:42.698 } 00:23:42.698 ] 00:23:42.698 }' 00:23:42.698 
13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1499083 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1499083 ']' 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1499083 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1499083 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1499083' 00:23:42.698 killing process with pid 1499083 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1499083 00:23:42.698 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.698 00:23:42.698 Latency(us) 00:23:42.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.698 =================================================================================================================== 00:23:42.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.698 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1499083 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1498953 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1498953 ']' 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1498953 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1498953 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1498953' 00:23:42.955 killing process with pid 1498953 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1498953 00:23:42.955 13:59:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1498953 00:23:43.214 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:43.214 13:59:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.214 13:59:21 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:43.214 "subsystems": [ 00:23:43.214 { 00:23:43.214 "subsystem": "keyring", 00:23:43.214 "config": [ 00:23:43.214 { 00:23:43.214 "method": "keyring_file_add_key", 00:23:43.214 "params": { 00:23:43.214 "name": "key0", 00:23:43.214 "path": "/tmp/tmp.LmJ62m0QS8" 00:23:43.214 } 00:23:43.214 } 00:23:43.214 ] 00:23:43.214 }, 00:23:43.214 { 00:23:43.214 "subsystem": "iobuf", 00:23:43.214 "config": [ 00:23:43.214 { 00:23:43.214 "method": "iobuf_set_options", 00:23:43.214 "params": { 00:23:43.214 "small_pool_count": 8192, 00:23:43.214 "large_pool_count": 1024, 00:23:43.214 "small_bufsize": 8192, 00:23:43.214 "large_bufsize": 135168 00:23:43.214 } 00:23:43.214 } 00:23:43.214 ] 00:23:43.214 }, 00:23:43.214 { 00:23:43.214 "subsystem": "sock", 00:23:43.214 "config": [ 00:23:43.214 { 00:23:43.214 "method": "sock_set_default_impl", 00:23:43.214 "params": { 00:23:43.214 "impl_name": "posix" 00:23:43.214 } 00:23:43.214 }, 00:23:43.214 { 00:23:43.214 "method": "sock_impl_set_options", 00:23:43.214 "params": { 00:23:43.214 "impl_name": "ssl", 00:23:43.214 "recv_buf_size": 4096, 00:23:43.214 "send_buf_size": 4096, 
00:23:43.214 "enable_recv_pipe": true, 00:23:43.214 "enable_quickack": false, 00:23:43.214 "enable_placement_id": 0, 00:23:43.214 "enable_zerocopy_send_server": true, 00:23:43.214 "enable_zerocopy_send_client": false, 00:23:43.214 "zerocopy_threshold": 0, 00:23:43.214 "tls_version": 0, 00:23:43.214 "enable_ktls": false 00:23:43.214 } 00:23:43.214 }, 00:23:43.214 { 00:23:43.214 "method": "sock_impl_set_options", 00:23:43.214 "params": { 00:23:43.214 "impl_name": "posix", 00:23:43.214 "recv_buf_size": 2097152, 00:23:43.214 "send_buf_size": 2097152, 00:23:43.214 "enable_recv_pipe": true, 00:23:43.214 "enable_quickack": false, 00:23:43.214 "enable_placement_id": 0, 00:23:43.214 "enable_zerocopy_send_server": true, 00:23:43.214 "enable_zerocopy_send_client": false, 00:23:43.214 "zerocopy_threshold": 0, 00:23:43.214 "tls_version": 0, 00:23:43.214 "enable_ktls": false 00:23:43.214 } 00:23:43.214 } 00:23:43.214 ] 00:23:43.214 }, 00:23:43.214 { 00:23:43.214 "subsystem": "vmd", 00:23:43.214 "config": [] 00:23:43.214 }, 00:23:43.214 { 00:23:43.214 "subsystem": "accel", 00:23:43.214 "config": [ 00:23:43.214 { 00:23:43.215 "method": "accel_set_options", 00:23:43.215 "params": { 00:23:43.215 "small_cache_size": 128, 00:23:43.215 "large_cache_size": 16, 00:23:43.215 "task_count": 2048, 00:23:43.215 "sequence_count": 2048, 00:23:43.215 "buf_count": 2048 00:23:43.215 } 00:23:43.215 } 00:23:43.215 ] 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "subsystem": "bdev", 00:23:43.215 "config": [ 00:23:43.215 { 00:23:43.215 "method": "bdev_set_options", 00:23:43.215 "params": { 00:23:43.215 "bdev_io_pool_size": 65535, 00:23:43.215 "bdev_io_cache_size": 256, 00:23:43.215 "bdev_auto_examine": true, 00:23:43.215 "iobuf_small_cache_size": 128, 00:23:43.215 "iobuf_large_cache_size": 16 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "bdev_raid_set_options", 00:23:43.215 "params": { 00:23:43.215 "process_window_size_kb": 1024 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 
00:23:43.215 "method": "bdev_iscsi_set_options", 00:23:43.215 "params": { 00:23:43.215 "timeout_sec": 30 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "bdev_nvme_set_options", 00:23:43.215 "params": { 00:23:43.215 "action_on_timeout": "none", 00:23:43.215 "timeout_us": 0, 00:23:43.215 "timeout_admin_us": 0, 00:23:43.215 "keep_alive_timeout_ms": 10000, 00:23:43.215 "arbitration_burst": 0, 00:23:43.215 "low_priority_weight": 0, 00:23:43.215 "medium_priority_weight": 0, 00:23:43.215 "high_priority_weight": 0, 00:23:43.215 "nvme_adminq_poll_period_us": 10000, 00:23:43.215 "nvme_ioq_poll_period_us": 0, 00:23:43.215 "io_queue_requests": 0, 00:23:43.215 "delay_cmd_submit": true, 00:23:43.215 "transport_retry_count": 4, 00:23:43.215 "bdev_retry_count": 3, 00:23:43.215 "transport_ack_timeout": 0, 00:23:43.215 "ctrlr_loss_timeout_sec": 0, 00:23:43.215 "reconnect_delay_sec": 0, 00:23:43.215 "fast_io_fail_timeout_sec": 0, 00:23:43.215 "disable_auto_failback": false, 00:23:43.215 "generate_uuids": false, 00:23:43.215 "transport_tos": 0, 00:23:43.215 "nvme_error_stat": false, 00:23:43.215 "rdma_srq_size": 0, 00:23:43.215 "io_path_stat": false, 00:23:43.215 "allow_accel_sequence": false, 00:23:43.215 "rdma_max_cq_size": 0, 00:23:43.215 "rdma_cm_event_timeout_ms": 0, 00:23:43.215 "dhchap_digests": [ 00:23:43.215 "sha256", 00:23:43.215 "sha384", 00:23:43.215 "sha512" 00:23:43.215 ], 00:23:43.215 "dhchap_dhgroups": [ 00:23:43.215 "null", 00:23:43.215 "ffdhe2048", 00:23:43.215 "ffdhe3072", 00:23:43.215 "ffdhe4096", 00:23:43.215 "ffdhe6144", 00:23:43.215 "ffdhe8192" 00:23:43.215 ] 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "bdev_nvme_set_hotplug", 00:23:43.215 "params": { 00:23:43.215 "period_us": 100000, 00:23:43.215 "enable": false 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "bdev_malloc_create", 00:23:43.215 "params": { 00:23:43.215 "name": "malloc0", 00:23:43.215 "num_blocks": 8192, 00:23:43.215 
"block_size": 4096, 00:23:43.215 "physical_block_size": 4096, 00:23:43.215 "uuid": "e7ff73d3-2b77-4da0-ad10-15d002011f94", 00:23:43.215 "optimal_io_boundary": 0 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "bdev_wait_for_examine" 00:23:43.215 } 00:23:43.215 ] 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "subsystem": "nbd", 00:23:43.215 "config": [] 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "subsystem": "scheduler", 00:23:43.215 "config": [ 00:23:43.215 { 00:23:43.215 "method": "framework_set_scheduler", 00:23:43.215 "params": { 00:23:43.215 "name": "static" 00:23:43.215 } 00:23:43.215 } 00:23:43.215 ] 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "subsystem": "nvmf", 00:23:43.215 "config": [ 00:23:43.215 { 00:23:43.215 "method": "nvmf_set_config", 00:23:43.215 "params": { 00:23:43.215 "discovery_filter": "match_any", 00:23:43.215 "admin_cmd_passthru": { 00:23:43.215 "identify_ctrlr": false 00:23:43.215 } 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_set_max_subsystems", 00:23:43.215 "params": { 00:23:43.215 "max_subsystems": 1024 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_set_crdt", 00:23:43.215 "params": { 00:23:43.215 "crdt1": 0, 00:23:43.215 "crdt2": 0, 00:23:43.215 "crdt3": 0 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_create_transport", 00:23:43.215 "params": { 00:23:43.215 "trtype": "TCP", 00:23:43.215 "max_queue_depth": 128, 00:23:43.215 "max_io_qpairs_per_ctrlr": 127, 00:23:43.215 "in_capsule_data_size": 4096, 00:23:43.215 "max_io_size": 131072, 00:23:43.215 "io_unit_size": 131072, 00:23:43.215 "max_aq_depth": 128, 00:23:43.215 "num_shared_buffers": 511, 00:23:43.215 "buf_cache_size": 4294967295, 00:23:43.215 "dif_insert_or_strip": false, 00:23:43.215 "zcopy": false, 00:23:43.215 "c2h_success": false, 00:23:43.215 "sock_priority": 0, 00:23:43.215 "abort_timeout_sec": 1, 00:23:43.215 "ack_timeout": 0, 00:23:43.215 "data_wr_pool_size": 0 
00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_create_subsystem", 00:23:43.215 "params": { 00:23:43.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.215 "allow_any_host": false, 00:23:43.215 "serial_number": "00000000000000000000", 00:23:43.215 "model_number": "SPDK bdev Controller", 00:23:43.215 "max_namespaces": 32, 00:23:43.215 "min_cntlid": 1, 00:23:43.215 "max_cntlid": 65519, 00:23:43.215 "ana_reporting": false 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_subsystem_add_host", 00:23:43.215 "params": { 00:23:43.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.215 "host": "nqn.2016-06.io.spdk:host1", 00:23:43.215 "psk": "key0" 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_subsystem_add_ns", 00:23:43.215 "params": { 00:23:43.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.215 "namespace": { 00:23:43.215 "nsid": 1, 00:23:43.215 "bdev_name": "malloc0", 00:23:43.215 "nguid": "E7FF73D32B774DA0AD1015D002011F94", 00:23:43.215 "uuid": "e7ff73d3-2b77-4da0-ad10-15d002011f94", 00:23:43.215 "no_auto_visible": false 00:23:43.215 } 00:23:43.215 } 00:23:43.215 }, 00:23:43.215 { 00:23:43.215 "method": "nvmf_subsystem_add_listener", 00:23:43.215 "params": { 00:23:43.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.215 "listen_address": { 00:23:43.215 "trtype": "TCP", 00:23:43.215 "adrfam": "IPv4", 00:23:43.215 "traddr": "10.0.0.2", 00:23:43.215 "trsvcid": "4420" 00:23:43.215 }, 00:23:43.215 "secure_channel": true 00:23:43.215 } 00:23:43.215 } 00:23:43.215 ] 00:23:43.215 } 00:23:43.215 ] 00:23:43.215 }' 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1499470 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1499470 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1499470 ']' 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:43.215 13:59:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.215 [2024-07-14 13:59:21.168398] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:43.215 [2024-07-14 13:59:21.168475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.474 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.474 [2024-07-14 13:59:21.231300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.474 [2024-07-14 13:59:21.314599] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.474 [2024-07-14 13:59:21.314652] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:43.474 [2024-07-14 13:59:21.314679] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.474 [2024-07-14 13:59:21.314690] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.474 [2024-07-14 13:59:21.314701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.474 [2024-07-14 13:59:21.314777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.732 [2024-07-14 13:59:21.557417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.732 [2024-07-14 13:59:21.589422] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.732 [2024-07-14 13:59:21.599102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1499536 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1499536 /var/tmp/bdevperf.sock 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1499536 ']' 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:44.297 13:59:22 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:44.297 "subsystems": [ 00:23:44.297 { 00:23:44.297 "subsystem": "keyring", 00:23:44.297 "config": [ 00:23:44.297 { 00:23:44.297 "method": "keyring_file_add_key", 00:23:44.297 "params": { 00:23:44.297 "name": "key0", 00:23:44.297 "path": "/tmp/tmp.LmJ62m0QS8" 00:23:44.297 } 00:23:44.297 } 00:23:44.297 ] 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "subsystem": "iobuf", 00:23:44.297 "config": [ 00:23:44.297 { 00:23:44.297 "method": "iobuf_set_options", 00:23:44.297 "params": { 00:23:44.297 "small_pool_count": 8192, 00:23:44.297 "large_pool_count": 1024, 00:23:44.297 "small_bufsize": 8192, 00:23:44.297 "large_bufsize": 135168 00:23:44.297 } 00:23:44.297 } 00:23:44.297 ] 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "subsystem": "sock", 00:23:44.297 "config": [ 00:23:44.297 { 00:23:44.297 "method": "sock_set_default_impl", 00:23:44.297 "params": { 00:23:44.297 "impl_name": "posix" 00:23:44.297 } 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "method": "sock_impl_set_options", 00:23:44.297 "params": { 00:23:44.297 "impl_name": "ssl", 00:23:44.297 "recv_buf_size": 4096, 00:23:44.297 "send_buf_size": 4096, 00:23:44.297 "enable_recv_pipe": true, 00:23:44.297 "enable_quickack": false, 00:23:44.297 "enable_placement_id": 0, 00:23:44.297 "enable_zerocopy_send_server": true, 00:23:44.297 
"enable_zerocopy_send_client": false, 00:23:44.297 "zerocopy_threshold": 0, 00:23:44.297 "tls_version": 0, 00:23:44.297 "enable_ktls": false 00:23:44.297 } 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "method": "sock_impl_set_options", 00:23:44.297 "params": { 00:23:44.297 "impl_name": "posix", 00:23:44.297 "recv_buf_size": 2097152, 00:23:44.297 "send_buf_size": 2097152, 00:23:44.297 "enable_recv_pipe": true, 00:23:44.297 "enable_quickack": false, 00:23:44.297 "enable_placement_id": 0, 00:23:44.297 "enable_zerocopy_send_server": true, 00:23:44.297 "enable_zerocopy_send_client": false, 00:23:44.297 "zerocopy_threshold": 0, 00:23:44.297 "tls_version": 0, 00:23:44.297 "enable_ktls": false 00:23:44.297 } 00:23:44.297 } 00:23:44.297 ] 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "subsystem": "vmd", 00:23:44.297 "config": [] 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "subsystem": "accel", 00:23:44.297 "config": [ 00:23:44.297 { 00:23:44.297 "method": "accel_set_options", 00:23:44.297 "params": { 00:23:44.297 "small_cache_size": 128, 00:23:44.297 "large_cache_size": 16, 00:23:44.297 "task_count": 2048, 00:23:44.297 "sequence_count": 2048, 00:23:44.297 "buf_count": 2048 00:23:44.297 } 00:23:44.297 } 00:23:44.297 ] 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "subsystem": "bdev", 00:23:44.297 "config": [ 00:23:44.297 { 00:23:44.297 "method": "bdev_set_options", 00:23:44.297 "params": { 00:23:44.297 "bdev_io_pool_size": 65535, 00:23:44.297 "bdev_io_cache_size": 256, 00:23:44.297 "bdev_auto_examine": true, 00:23:44.297 "iobuf_small_cache_size": 128, 00:23:44.297 "iobuf_large_cache_size": 16 00:23:44.297 } 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "method": "bdev_raid_set_options", 00:23:44.297 "params": { 00:23:44.297 "process_window_size_kb": 1024 00:23:44.297 } 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "method": "bdev_iscsi_set_options", 00:23:44.297 "params": { 00:23:44.297 "timeout_sec": 30 00:23:44.297 } 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "method": 
"bdev_nvme_set_options", 00:23:44.297 "params": { 00:23:44.297 "action_on_timeout": "none", 00:23:44.297 "timeout_us": 0, 00:23:44.297 "timeout_admin_us": 0, 00:23:44.297 "keep_alive_timeout_ms": 10000, 00:23:44.297 "arbitration_burst": 0, 00:23:44.297 "low_priority_weight": 0, 00:23:44.297 "medium_priority_weight": 0, 00:23:44.297 "high_priority_weight": 0, 00:23:44.297 "nvme_adminq_poll_period_us": 10000, 00:23:44.297 "nvme_ioq_poll_period_us": 0, 00:23:44.297 "io_queue_requests": 512, 00:23:44.297 "delay_cmd_submit": true, 00:23:44.297 "transport_retry_count": 4, 00:23:44.297 "bdev_retry_count": 3, 00:23:44.297 "transport_ack_timeout": 0, 00:23:44.297 "ctrlr_loss_timeout_sec": 0, 00:23:44.297 "reconnect_delay_sec": 0, 00:23:44.297 "fast_io_fail_timeout_sec": 0, 00:23:44.297 "disable_auto_failback": false, 00:23:44.297 "generate_uuids": false, 00:23:44.297 "transport_tos": 0, 00:23:44.297 "nvme_error_stat": false, 00:23:44.297 "rdma_srq_size": 0, 00:23:44.297 "io_path_stat": false, 00:23:44.297 "allow_accel_sequence": false, 00:23:44.297 "rdma_max_cq_size": 0, 00:23:44.297 "rdma_cm_event_timeout_ms": 0, 00:23:44.297 "dhchap_digests": [ 00:23:44.297 "sha256", 00:23:44.297 "sha384", 00:23:44.297 "sha512" 00:23:44.297 ], 00:23:44.297 "dhchap_dhgroups": [ 00:23:44.297 "null", 00:23:44.297 "ffdhe2048", 00:23:44.297 "ffdhe3072", 00:23:44.297 "ffdhe4096", 00:23:44.297 "ffdhe6144", 00:23:44.297 "ffdhe8192" 00:23:44.297 ] 00:23:44.297 } 00:23:44.297 }, 00:23:44.297 { 00:23:44.297 "method": "bdev_nvme_attach_controller", 00:23:44.297 "params": { 00:23:44.297 "name": "nvme0", 00:23:44.297 "trtype": "TCP", 00:23:44.297 "adrfam": "IPv4", 00:23:44.297 "traddr": "10.0.0.2", 00:23:44.298 "trsvcid": "4420", 00:23:44.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.298 "prchk_reftag": false, 00:23:44.298 "prchk_guard": false, 00:23:44.298 "ctrlr_loss_timeout_sec": 0, 00:23:44.298 "reconnect_delay_sec": 0, 00:23:44.298 "fast_io_fail_timeout_sec": 0, 00:23:44.298 "psk": "key0", 
00:23:44.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.298 "hdgst": false, 00:23:44.298 "ddgst": false 00:23:44.298 } 00:23:44.298 }, 00:23:44.298 { 00:23:44.298 "method": "bdev_nvme_set_hotplug", 00:23:44.298 "params": { 00:23:44.298 "period_us": 100000, 00:23:44.298 "enable": false 00:23:44.298 } 00:23:44.298 }, 00:23:44.298 { 00:23:44.298 "method": "bdev_enable_histogram", 00:23:44.298 "params": { 00:23:44.298 "name": "nvme0n1", 00:23:44.298 "enable": true 00:23:44.298 } 00:23:44.298 }, 00:23:44.298 { 00:23:44.298 "method": "bdev_wait_for_examine" 00:23:44.298 } 00:23:44.298 ] 00:23:44.298 }, 00:23:44.298 { 00:23:44.298 "subsystem": "nbd", 00:23:44.298 "config": [] 00:23:44.298 } 00:23:44.298 ] 00:23:44.298 }' 00:23:44.298 13:59:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.298 [2024-07-14 13:59:22.211189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:44.298 [2024-07-14 13:59:22.211282] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499536 ] 00:23:44.298 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.298 [2024-07-14 13:59:22.274975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.556 [2024-07-14 13:59:22.365075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.813 [2024-07-14 13:59:22.547361] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.377 13:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:45.377 13:59:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:45.377 13:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:45.377 
13:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:45.634 13:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.634 13:59:23 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.634 Running I/O for 1 seconds... 00:23:47.007 00:23:47.007 Latency(us) 00:23:47.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.007 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.007 Verification LBA range: start 0x0 length 0x2000 00:23:47.007 nvme0n1 : 1.02 3166.40 12.37 0.00 0.00 39987.62 9223.59 34564.17 00:23:47.007 =================================================================================================================== 00:23:47.007 Total : 3166.40 12.37 0.00 0.00 39987.62 9223.59 34564.17 00:23:47.007 0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:47.007 nvmf_trace.0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1499536 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1499536 ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1499536 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1499536 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1499536' 00:23:47.007 killing process with pid 1499536 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1499536 00:23:47.007 Received shutdown signal, test time was about 1.000000 seconds 00:23:47.007 00:23:47.007 Latency(us) 00:23:47.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.007 =================================================================================================================== 00:23:47.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1499536 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.007 rmmod nvme_tcp 00:23:47.007 rmmod nvme_fabrics 00:23:47.007 rmmod nvme_keyring 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1499470 ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1499470 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1499470 ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1499470 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:47.007 13:59:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1499470 00:23:47.266 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:47.266 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:47.266 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1499470' 00:23:47.266 killing process with pid 1499470 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1499470 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1499470 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.267 13:59:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.803 13:59:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.803 13:59:27 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2miYgOXVas /tmp/tmp.xiBQ6CFntu /tmp/tmp.LmJ62m0QS8 00:23:49.803 00:23:49.803 real 1m18.551s 00:23:49.803 user 2m9.243s 00:23:49.803 sys 0m24.094s 00:23:49.803 13:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:49.803 13:59:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.803 ************************************ 00:23:49.803 END TEST nvmf_tls 00:23:49.803 ************************************ 00:23:49.803 13:59:27 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:49.803 13:59:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:49.803 13:59:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:49.803 13:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.803 ************************************ 00:23:49.803 START TEST nvmf_fips 00:23:49.803 ************************************ 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:49.803 * Looking for test storage... 
00:23:49.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:49.803 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:49.804 Error setting digest 00:23:49.804 00C2E98DFA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:49.804 00C2E98DFA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:23:49.804 13:59:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:51.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:51.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.764 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:51.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:51.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:23:51.765 00:23:51.765 --- 10.0.0.2 ping statistics --- 00:23:51.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.765 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:23:51.765 00:23:51.765 --- 10.0.0.1 ping statistics --- 00:23:51.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.765 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1501879 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1501879 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1501879 ']' 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.765 13:59:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.022 [2024-07-14 13:59:29.770923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:52.022 [2024-07-14 13:59:29.771011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.022 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.022 [2024-07-14 13:59:29.834168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.022 [2024-07-14 13:59:29.918134] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.023 [2024-07-14 13:59:29.918202] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.023 [2024-07-14 13:59:29.918215] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.023 [2024-07-14 13:59:29.918241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.023 [2024-07-14 13:59:29.918251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.023 [2024-07-14 13:59:29.918281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:52.953 13:59:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.210 [2024-07-14 13:59:31.003130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.210 [2024-07-14 13:59:31.019128] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:53.210 [2024-07-14 13:59:31.019376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.210 [2024-07-14 13:59:31.051720] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:53.210 malloc0 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1502035 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1502035 /var/tmp/bdevperf.sock 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1502035 ']' 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:53.210 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.210 [2024-07-14 13:59:31.144427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:23:53.210 [2024-07-14 13:59:31.144502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502035 ] 00:23:53.210 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.468 [2024-07-14 13:59:31.203000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.468 [2024-07-14 13:59:31.287744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.468 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.468 13:59:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:53.468 13:59:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:53.725 [2024-07-14 13:59:31.619089] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.725 [2024-07-14 13:59:31.619253] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:53.725 TLSTESTn1 00:23:53.725 13:59:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.982 Running I/O for 10 seconds... 
00:24:03.948 00:24:03.948 Latency(us) 00:24:03.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.948 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:03.948 Verification LBA range: start 0x0 length 0x2000 00:24:03.948 TLSTESTn1 : 10.02 3244.17 12.67 0.00 0.00 39391.94 6844.87 49516.09 00:24:03.948 =================================================================================================================== 00:24:03.948 Total : 3244.17 12.67 0.00 0.00 39391.94 6844.87 49516.09 00:24:03.948 0 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:03.948 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.948 nvmf_trace.0 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1502035 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1502035 ']' 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill 
-0 1502035 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1502035 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1502035' 00:24:04.206 killing process with pid 1502035 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1502035 00:24:04.206 Received shutdown signal, test time was about 10.000000 seconds 00:24:04.206 00:24:04.206 Latency(us) 00:24:04.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.206 =================================================================================================================== 00:24:04.206 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.206 [2024-07-14 13:59:41.982425] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:04.206 13:59:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1502035 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:24:04.464 rmmod nvme_tcp 00:24:04.464 rmmod nvme_fabrics 00:24:04.464 rmmod nvme_keyring 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1501879 ']' 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1501879 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1501879 ']' 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1501879 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1501879 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1501879' 00:24:04.464 killing process with pid 1501879 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1501879 00:24:04.464 [2024-07-14 13:59:42.300139] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:04.464 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1501879 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.723 13:59:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.622 13:59:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.622 13:59:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:06.622 00:24:06.622 real 0m17.260s 00:24:06.622 user 0m18.961s 00:24:06.622 sys 0m6.764s 00:24:06.622 13:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:06.622 13:59:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:06.622 ************************************ 00:24:06.622 END TEST nvmf_fips 00:24:06.622 ************************************ 00:24:06.879 13:59:44 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:06.879 13:59:44 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:06.879 13:59:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:06.879 13:59:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:06.879 13:59:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.879 ************************************ 00:24:06.879 START TEST nvmf_fuzz 00:24:06.879 ************************************ 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:06.879 * Looking for test 
storage... 00:24:06.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.879 13:59:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.880 13:59:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.780 
13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.780 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.780 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:24:08.781 00:24:08.781 --- 10.0.0.2 ping statistics --- 00:24:08.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.781 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:24:08.781 00:24:08.781 --- 10.0.0.1 ping statistics --- 00:24:08.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.781 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=1505284 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1505284 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1505284 ']' 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.781 13:59:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:09.039 13:59:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:09.039 13:59:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:09.039 13:59:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.298 Malloc0 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:09.298 13:59:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:41.372 Fuzzing completed. Shutting down the fuzz application 00:24:41.372 00:24:41.372 Dumping successful admin opcodes: 00:24:41.372 8, 9, 10, 24, 00:24:41.372 Dumping successful io opcodes: 00:24:41.372 0, 9, 00:24:41.372 NS: 0x200003aeff00 I/O qp, Total commands completed: 433532, total successful commands: 2532, random_seed: 1910090048 00:24:41.372 NS: 0x200003aeff00 admin qp, Total commands completed: 52768, total successful commands: 422, random_seed: 87796608 00:24:41.372 14:00:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:41.632 Fuzzing completed. 
Shutting down the fuzz application 00:24:41.632 00:24:41.632 Dumping successful admin opcodes: 00:24:41.632 24, 00:24:41.632 Dumping successful io opcodes: 00:24:41.632 00:24:41.632 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3168268664 00:24:41.632 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3168390376 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.632 rmmod nvme_tcp 00:24:41.632 rmmod nvme_fabrics 00:24:41.632 rmmod nvme_keyring 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1505284 ']' 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 1505284 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1505284 ']' 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 1505284 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1505284 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1505284' 00:24:41.632 killing process with pid 1505284 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 1505284 00:24:41.632 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 1505284 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.891 14:00:19 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.424 14:00:21 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:44.424 14:00:21 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:44.424 00:24:44.424 real 0m37.210s 00:24:44.424 user 0m51.721s 00:24:44.424 sys 0m14.768s 00:24:44.424 14:00:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:44.424 14:00:21 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.424 ************************************ 00:24:44.424 END TEST nvmf_fuzz 00:24:44.424 ************************************ 00:24:44.424 14:00:21 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:44.424 14:00:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:44.424 14:00:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:44.424 14:00:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.424 ************************************ 00:24:44.424 START TEST nvmf_multiconnection 00:24:44.424 ************************************ 00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:44.424 * Looking for test storage... 
00:24:44.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:44.424 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable
00:24:44.425 14:00:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=()
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:24:46.324 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:24:46.324 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:46.324 Found net devices under 0000:0a:00.0: cvl_0_0
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:46.324 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:46.325 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:46.325 14:00:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:46.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:46.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms
00:24:46.325
00:24:46.325 --- 10.0.0.2 ping statistics ---
00:24:46.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:46.325 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:46.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:46.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:24:46.325
00:24:46.325 --- 10.0.0.1 ping statistics ---
00:24:46.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:46.325 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1511625
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1511625
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1511625 ']'
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:46.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable
00:24:46.325 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.325 [2024-07-14 14:00:24.084011] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:24:46.325 [2024-07-14 14:00:24.084087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:46.325 EAL: No free 2048 kB hugepages reported on node 1
00:24:46.325 [2024-07-14 14:00:24.149506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:46.325 [2024-07-14 14:00:24.234875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:46.325 [2024-07-14 14:00:24.234941] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:46.325 [2024-07-14 14:00:24.234955] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:46.325 [2024-07-14 14:00:24.234966] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:46.325 [2024-07-14 14:00:24.234975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:46.325 [2024-07-14 14:00:24.235091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:46.325 [2024-07-14 14:00:24.235158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:46.325 [2024-07-14 14:00:24.235206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:24:46.325 [2024-07-14 14:00:24.235208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.581 [2024-07-14 14:00:24.390768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.581 Malloc1
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.581 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 [2024-07-14 14:00:24.448358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 Malloc2
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 Malloc3
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.582 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 Malloc4
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 Malloc5
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:24:46.839 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 Malloc6
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 Malloc7
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 Malloc8
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection --
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.840 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 Malloc9 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 Malloc10 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 Malloc11 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.098 14:00:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:47.662 14:00:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:47.662 14:00:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:47.662 14:00:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.662 14:00:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:47.662 14:00:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- 
# (( nvme_devices == nvme_device_counter )) 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:50.193 14:00:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:50.451 14:00:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:50.451 14:00:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:50.451 14:00:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.451 14:00:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:50.451 14:00:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.347 14:00:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:53.277 14:00:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:53.277 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:53.277 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.277 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:53.277 14:00:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:55.168 14:00:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:55.168 14:00:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:55.168 14:00:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:55.168 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:55.168 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.168 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:55.168 14:00:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.168 14:00:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:55.737 14:00:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:55.738 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # 
local i=0 00:24:55.738 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.738 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:55.738 14:00:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.257 14:00:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:58.821 14:00:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:58.821 14:00:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:58.821 14:00:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.821 14:00:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:58.821 14:00:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:00.711 14:00:38 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.711 14:00:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:01.642 14:00:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:01.642 14:00:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:01.642 14:00:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.642 14:00:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:01.642 14:00:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.560 14:00:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:04.122 14:00:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:04.122 14:00:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:04.122 14:00:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.122 14:00:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:04.122 14:00:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:06.641 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:25:06.642 14:00:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:06.901 14:00:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:06.901 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:06.901 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.901 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:06.901 14:00:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.424 14:00:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:09.989 14:00:47 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:09.989 14:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:09.989 14:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.989 14:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:09.989 14:00:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.887 14:00:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:12.820 14:00:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:12.820 14:00:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:12.820 14:00:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.820 14:00:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # 
[[ -n '' ]] 00:25:12.820 14:00:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.719 14:00:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:15.650 14:00:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:15.650 14:00:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:15.650 14:00:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.650 14:00:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:15.650 14:00:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:17.545 14:00:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:17.545 14:00:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:17.545 14:00:55 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:17.545 14:00:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:17.545 14:00:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.545 14:00:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:17.545 14:00:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:17.545 [global] 00:25:17.545 thread=1 00:25:17.545 invalidate=1 00:25:17.545 rw=read 00:25:17.545 time_based=1 00:25:17.545 runtime=10 00:25:17.545 ioengine=libaio 00:25:17.545 direct=1 00:25:17.545 bs=262144 00:25:17.545 iodepth=64 00:25:17.545 norandommap=1 00:25:17.545 numjobs=1 00:25:17.545 00:25:17.545 [job0] 00:25:17.545 filename=/dev/nvme0n1 00:25:17.545 [job1] 00:25:17.545 filename=/dev/nvme10n1 00:25:17.545 [job2] 00:25:17.545 filename=/dev/nvme1n1 00:25:17.545 [job3] 00:25:17.545 filename=/dev/nvme2n1 00:25:17.545 [job4] 00:25:17.545 filename=/dev/nvme3n1 00:25:17.545 [job5] 00:25:17.545 filename=/dev/nvme4n1 00:25:17.546 [job6] 00:25:17.546 filename=/dev/nvme5n1 00:25:17.546 [job7] 00:25:17.546 filename=/dev/nvme6n1 00:25:17.546 [job8] 00:25:17.546 filename=/dev/nvme7n1 00:25:17.546 [job9] 00:25:17.546 filename=/dev/nvme8n1 00:25:17.546 [job10] 00:25:17.546 filename=/dev/nvme9n1 00:25:17.546 Could not set queue depth (nvme0n1) 00:25:17.546 Could not set queue depth (nvme10n1) 00:25:17.546 Could not set queue depth (nvme1n1) 00:25:17.546 Could not set queue depth (nvme2n1) 00:25:17.546 Could not set queue depth (nvme3n1) 00:25:17.546 Could not set queue depth (nvme4n1) 00:25:17.546 Could not set queue depth (nvme5n1) 00:25:17.546 Could not set queue depth (nvme6n1) 00:25:17.546 Could not set queue depth (nvme7n1) 00:25:17.546 Could not set 
queue depth (nvme8n1) 00:25:17.546 Could not set queue depth (nvme9n1) 00:25:17.803 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:17.803 fio-3.35 00:25:17.803 Starting 11 threads 00:25:29.997 00:25:29.997 job0: (groupid=0, jobs=1): err= 0: pid=1515803: Sun Jul 14 14:01:06 2024 00:25:29.997 read: IOPS=583, BW=146MiB/s (153MB/s)(1472MiB/10086msec) 00:25:29.997 slat (usec): min=8, max=115045, avg=969.85, stdev=4983.30 00:25:29.997 clat (usec): min=1092, max=307318, avg=108624.80, stdev=53394.74 00:25:29.997 lat (usec): min=1115, max=307386, avg=109594.65, stdev=54080.11 00:25:29.997 clat percentiles (msec): 00:25:29.997 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 26], 20.00th=[ 62], 00:25:29.997 | 
30.00th=[ 84], 40.00th=[ 99], 50.00th=[ 111], 60.00th=[ 124], 00:25:29.997 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 180], 95.00th=[ 194], 00:25:29.997 | 99.00th=[ 211], 99.50th=[ 226], 99.90th=[ 239], 99.95th=[ 249], 00:25:29.997 | 99.99th=[ 309] 00:25:29.997 bw ( KiB/s): min=81408, max=281088, per=7.89%, avg=149021.30, stdev=58412.81, samples=20 00:25:29.997 iops : min= 318, max= 1098, avg=582.00, stdev=227.99, samples=20 00:25:29.997 lat (msec) : 2=0.17%, 4=0.99%, 10=4.23%, 20=3.74%, 50=5.01% 00:25:29.997 lat (msec) : 100=27.44%, 250=58.39%, 500=0.03% 00:25:29.997 cpu : usr=0.23%, sys=1.36%, ctx=1310, majf=0, minf=4097 00:25:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.997 issued rwts: total=5886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.997 job1: (groupid=0, jobs=1): err= 0: pid=1515830: Sun Jul 14 14:01:06 2024 00:25:29.997 read: IOPS=800, BW=200MiB/s (210MB/s)(2017MiB/10076msec) 00:25:29.997 slat (usec): min=12, max=110899, avg=1115.46, stdev=4282.62 00:25:29.997 clat (usec): min=1793, max=290919, avg=78750.07, stdev=44330.35 00:25:29.997 lat (usec): min=1820, max=290986, avg=79865.54, stdev=44953.50 00:25:29.997 clat percentiles (msec): 00:25:29.997 | 1.00th=[ 6], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 43], 00:25:29.997 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 73], 60.00th=[ 83], 00:25:29.997 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 132], 95.00th=[ 188], 00:25:29.997 | 99.00th=[ 209], 99.50th=[ 220], 99.90th=[ 239], 99.95th=[ 271], 00:25:29.997 | 99.99th=[ 292] 00:25:29.997 bw ( KiB/s): min=114176, max=387072, per=10.85%, avg=204860.80, stdev=70988.09, samples=20 00:25:29.997 iops : min= 446, max= 1512, avg=800.15, stdev=277.27, samples=20 00:25:29.997 lat (msec) : 
2=0.06%, 4=0.42%, 10=1.43%, 20=1.82%, 50=21.07% 00:25:29.997 lat (msec) : 100=50.38%, 250=24.71%, 500=0.10% 00:25:29.997 cpu : usr=0.45%, sys=2.41%, ctx=1411, majf=0, minf=4097 00:25:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.997 issued rwts: total=8068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.997 job2: (groupid=0, jobs=1): err= 0: pid=1515871: Sun Jul 14 14:01:06 2024 00:25:29.997 read: IOPS=645, BW=161MiB/s (169MB/s)(1635MiB/10130msec) 00:25:29.997 slat (usec): min=9, max=72612, avg=1152.57, stdev=4533.86 00:25:29.997 clat (msec): min=3, max=273, avg=97.87, stdev=50.61 00:25:29.997 lat (msec): min=3, max=284, avg=99.03, stdev=51.39 00:25:29.997 clat percentiles (msec): 00:25:29.997 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 36], 20.00th=[ 57], 00:25:29.997 | 30.00th=[ 69], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 104], 00:25:29.997 | 70.00th=[ 125], 80.00th=[ 142], 90.00th=[ 174], 95.00th=[ 190], 00:25:29.997 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 245], 99.95th=[ 255], 00:25:29.997 | 99.99th=[ 275] 00:25:29.997 bw ( KiB/s): min=83456, max=301056, per=8.78%, avg=165796.50, stdev=62011.34, samples=20 00:25:29.997 iops : min= 326, max= 1176, avg=647.55, stdev=242.26, samples=20 00:25:29.997 lat (msec) : 4=0.14%, 10=3.21%, 20=3.56%, 50=8.84%, 100=41.90% 00:25:29.997 lat (msec) : 250=42.29%, 500=0.06% 00:25:29.997 cpu : usr=0.43%, sys=1.97%, ctx=1302, majf=0, minf=4097 00:25:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.997 issued rwts: total=6541,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:25:29.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.997 job3: (groupid=0, jobs=1): err= 0: pid=1515886: Sun Jul 14 14:01:06 2024 00:25:29.997 read: IOPS=582, BW=146MiB/s (153MB/s)(1469MiB/10090msec) 00:25:29.997 slat (usec): min=8, max=85391, avg=1381.04, stdev=5196.91 00:25:29.997 clat (msec): min=7, max=247, avg=108.47, stdev=51.51 00:25:29.997 lat (msec): min=7, max=262, avg=109.85, stdev=52.36 00:25:29.997 clat percentiles (msec): 00:25:29.997 | 1.00th=[ 16], 5.00th=[ 38], 10.00th=[ 52], 20.00th=[ 61], 00:25:29.997 | 30.00th=[ 68], 40.00th=[ 81], 50.00th=[ 105], 60.00th=[ 126], 00:25:29.997 | 70.00th=[ 140], 80.00th=[ 161], 90.00th=[ 184], 95.00th=[ 199], 00:25:29.997 | 99.00th=[ 213], 99.50th=[ 215], 99.90th=[ 236], 99.95th=[ 241], 00:25:29.997 | 99.99th=[ 249] 00:25:29.997 bw ( KiB/s): min=79360, max=300032, per=7.88%, avg=148727.35, stdev=64398.85, samples=20 00:25:29.997 iops : min= 310, max= 1172, avg=580.90, stdev=251.58, samples=20 00:25:29.997 lat (msec) : 10=0.41%, 20=1.62%, 50=6.69%, 100=40.47%, 250=50.82% 00:25:29.997 cpu : usr=0.23%, sys=1.63%, ctx=1047, majf=0, minf=4097 00:25:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.997 issued rwts: total=5874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.997 job4: (groupid=0, jobs=1): err= 0: pid=1515892: Sun Jul 14 14:01:06 2024 00:25:29.997 read: IOPS=527, BW=132MiB/s (138MB/s)(1330MiB/10090msec) 00:25:29.997 slat (usec): min=10, max=117400, avg=1385.74, stdev=5578.34 00:25:29.997 clat (usec): min=977, max=268529, avg=119910.88, stdev=49270.01 00:25:29.997 lat (usec): min=1021, max=321098, avg=121296.61, stdev=50031.59 00:25:29.997 clat percentiles (msec): 00:25:29.997 | 1.00th=[ 
5], 5.00th=[ 29], 10.00th=[ 51], 20.00th=[ 82], 00:25:29.997 | 30.00th=[ 97], 40.00th=[ 110], 50.00th=[ 124], 60.00th=[ 136], 00:25:29.997 | 70.00th=[ 146], 80.00th=[ 163], 90.00th=[ 184], 95.00th=[ 199], 00:25:29.997 | 99.00th=[ 220], 99.50th=[ 224], 99.90th=[ 239], 99.95th=[ 259], 00:25:29.997 | 99.99th=[ 271] 00:25:29.997 bw ( KiB/s): min=80384, max=206336, per=7.12%, avg=134518.45, stdev=28804.70, samples=20 00:25:29.997 iops : min= 314, max= 806, avg=525.40, stdev=112.53, samples=20 00:25:29.997 lat (usec) : 1000=0.04% 00:25:29.997 lat (msec) : 2=0.58%, 4=0.30%, 10=0.85%, 20=2.26%, 50=5.88% 00:25:29.997 lat (msec) : 100=22.67%, 250=67.36%, 500=0.06% 00:25:29.997 cpu : usr=0.28%, sys=1.70%, ctx=1188, majf=0, minf=3721 00:25:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.997 issued rwts: total=5319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.997 job5: (groupid=0, jobs=1): err= 0: pid=1515896: Sun Jul 14 14:01:06 2024 00:25:29.997 read: IOPS=976, BW=244MiB/s (256MB/s)(2448MiB/10025msec) 00:25:29.997 slat (usec): min=8, max=91700, avg=684.64, stdev=2932.89 00:25:29.997 clat (usec): min=1350, max=241090, avg=64810.77, stdev=36438.76 00:25:29.997 lat (usec): min=1364, max=241106, avg=65495.41, stdev=36688.81 00:25:29.997 clat percentiles (msec): 00:25:29.997 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 27], 20.00th=[ 35], 00:25:29.997 | 30.00th=[ 42], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 67], 00:25:29.997 | 70.00th=[ 74], 80.00th=[ 86], 90.00th=[ 115], 95.00th=[ 144], 00:25:29.997 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 224], 99.95th=[ 226], 00:25:29.997 | 99.99th=[ 241] 00:25:29.997 bw ( KiB/s): min=105683, max=476672, per=13.18%, avg=248955.20, stdev=96153.13, samples=20 
00:25:29.997 iops : min= 412, max= 1862, avg=972.35, stdev=375.68, samples=20 00:25:29.997 lat (msec) : 2=0.10%, 4=0.49%, 10=1.61%, 20=4.08%, 50=30.18% 00:25:29.997 lat (msec) : 100=49.79%, 250=13.75% 00:25:29.997 cpu : usr=0.45%, sys=2.73%, ctx=1674, majf=0, minf=4097 00:25:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.997 issued rwts: total=9790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.997 job6: (groupid=0, jobs=1): err= 0: pid=1515897: Sun Jul 14 14:01:06 2024 00:25:29.998 read: IOPS=651, BW=163MiB/s (171MB/s)(1644MiB/10088msec) 00:25:29.998 slat (usec): min=12, max=154483, avg=1401.64, stdev=6025.60 00:25:29.998 clat (usec): min=1503, max=332077, avg=96738.74, stdev=56919.57 00:25:29.998 lat (usec): min=1526, max=332130, avg=98140.39, stdev=57942.12 00:25:29.998 clat percentiles (msec): 00:25:29.998 | 1.00th=[ 13], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 36], 00:25:29.998 | 30.00th=[ 46], 40.00th=[ 73], 50.00th=[ 89], 60.00th=[ 111], 00:25:29.998 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 182], 95.00th=[ 199], 00:25:29.998 | 99.00th=[ 230], 99.50th=[ 239], 99.90th=[ 284], 99.95th=[ 313], 00:25:29.998 | 99.99th=[ 334] 00:25:29.998 bw ( KiB/s): min=78336, max=467544, per=8.82%, avg=166624.00, stdev=100593.92, samples=20 00:25:29.998 iops : min= 306, max= 1826, avg=650.80, stdev=392.91, samples=20 00:25:29.998 lat (msec) : 2=0.08%, 4=0.27%, 10=0.37%, 20=1.37%, 50=28.31% 00:25:29.998 lat (msec) : 100=25.46%, 250=43.96%, 500=0.18% 00:25:29.998 cpu : usr=0.44%, sys=2.00%, ctx=1196, majf=0, minf=4097 00:25:29.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:29.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.998 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.998 issued rwts: total=6574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.998 job7: (groupid=0, jobs=1): err= 0: pid=1515898: Sun Jul 14 14:01:06 2024 00:25:29.998 read: IOPS=759, BW=190MiB/s (199MB/s)(1913MiB/10079msec) 00:25:29.998 slat (usec): min=9, max=47169, avg=1207.63, stdev=3740.43 00:25:29.998 clat (usec): min=1612, max=199293, avg=83039.00, stdev=30182.34 00:25:29.998 lat (usec): min=1662, max=199332, avg=84246.63, stdev=30676.67 00:25:29.998 clat percentiles (msec): 00:25:29.998 | 1.00th=[ 22], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 57], 00:25:29.998 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 87], 00:25:29.998 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 126], 95.00th=[ 138], 00:25:29.998 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 197], 00:25:29.998 | 99.99th=[ 201] 00:25:29.998 bw ( KiB/s): min=126464, max=318464, per=10.28%, avg=194194.65, stdev=52510.55, samples=20 00:25:29.998 iops : min= 494, max= 1244, avg=758.50, stdev=205.17, samples=20 00:25:29.998 lat (msec) : 2=0.04%, 4=0.13%, 10=0.05%, 20=0.69%, 50=9.14% 00:25:29.998 lat (msec) : 100=63.44%, 250=26.51% 00:25:29.998 cpu : usr=0.34%, sys=2.49%, ctx=1357, majf=0, minf=4097 00:25:29.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:29.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.998 issued rwts: total=7651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.998 job8: (groupid=0, jobs=1): err= 0: pid=1515899: Sun Jul 14 14:01:06 2024 00:25:29.998 read: IOPS=494, BW=124MiB/s (130MB/s)(1241MiB/10038msec) 00:25:29.998 slat (usec): min=9, max=131661, avg=1609.27, stdev=6587.49 00:25:29.998 clat 
(msec): min=8, max=324, avg=127.78, stdev=47.46 00:25:29.998 lat (msec): min=8, max=324, avg=129.39, stdev=48.59 00:25:29.998 clat percentiles (msec): 00:25:29.998 | 1.00th=[ 18], 5.00th=[ 48], 10.00th=[ 67], 20.00th=[ 88], 00:25:29.998 | 30.00th=[ 103], 40.00th=[ 114], 50.00th=[ 130], 60.00th=[ 142], 00:25:29.998 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 192], 95.00th=[ 203], 00:25:29.998 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 305], 99.95th=[ 309], 00:25:29.998 | 99.99th=[ 326] 00:25:29.998 bw ( KiB/s): min=72047, max=185485, per=6.64%, avg=125378.55, stdev=33980.89, samples=20 00:25:29.998 iops : min= 281, max= 724, avg=489.70, stdev=132.73, samples=20 00:25:29.998 lat (msec) : 10=0.12%, 20=1.29%, 50=3.73%, 100=23.58%, 250=71.04% 00:25:29.998 lat (msec) : 500=0.24% 00:25:29.998 cpu : usr=0.35%, sys=1.63%, ctx=1083, majf=0, minf=4097 00:25:29.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:29.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.998 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.998 job9: (groupid=0, jobs=1): err= 0: pid=1515901: Sun Jul 14 14:01:06 2024 00:25:29.998 read: IOPS=524, BW=131MiB/s (137MB/s)(1321MiB/10078msec) 00:25:29.998 slat (usec): min=9, max=125046, avg=1408.66, stdev=6183.27 00:25:29.998 clat (msec): min=20, max=306, avg=120.60, stdev=49.62 00:25:29.998 lat (msec): min=20, max=321, avg=122.01, stdev=50.40 00:25:29.998 clat percentiles (msec): 00:25:29.998 | 1.00th=[ 32], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 70], 00:25:29.998 | 30.00th=[ 85], 40.00th=[ 105], 50.00th=[ 118], 60.00th=[ 138], 00:25:29.998 | 70.00th=[ 148], 80.00th=[ 167], 90.00th=[ 192], 95.00th=[ 205], 00:25:29.998 | 99.00th=[ 224], 99.50th=[ 234], 99.90th=[ 279], 99.95th=[ 288], 00:25:29.998 | 99.99th=[ 305] 
00:25:29.998 bw ( KiB/s): min=75113, max=230400, per=7.07%, avg=133596.30, stdev=45167.85, samples=20 00:25:29.998 iops : min= 293, max= 900, avg=521.80, stdev=176.44, samples=20 00:25:29.998 lat (msec) : 50=5.09%, 100=31.95%, 250=62.75%, 500=0.21% 00:25:29.998 cpu : usr=0.30%, sys=1.63%, ctx=1155, majf=0, minf=4097 00:25:29.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:29.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.998 issued rwts: total=5283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.998 job10: (groupid=0, jobs=1): err= 0: pid=1515902: Sun Jul 14 14:01:06 2024 00:25:29.998 read: IOPS=874, BW=219MiB/s (229MB/s)(2194MiB/10038msec) 00:25:29.998 slat (usec): min=9, max=78413, avg=1072.36, stdev=3519.46 00:25:29.998 clat (usec): min=991, max=261281, avg=72087.73, stdev=39136.01 00:25:29.998 lat (usec): min=1047, max=279473, avg=73160.09, stdev=39729.66 00:25:29.998 clat percentiles (msec): 00:25:29.998 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 30], 20.00th=[ 34], 00:25:29.998 | 30.00th=[ 46], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 78], 00:25:29.998 | 70.00th=[ 89], 80.00th=[ 105], 90.00th=[ 128], 95.00th=[ 144], 00:25:29.998 | 99.00th=[ 188], 99.50th=[ 209], 99.90th=[ 232], 99.95th=[ 247], 00:25:29.998 | 99.99th=[ 262] 00:25:29.998 bw ( KiB/s): min=94208, max=459776, per=11.81%, avg=222946.40, stdev=107915.22, samples=20 00:25:29.998 iops : min= 368, max= 1796, avg=870.80, stdev=421.50, samples=20 00:25:29.998 lat (usec) : 1000=0.01% 00:25:29.998 lat (msec) : 2=0.40%, 4=0.52%, 10=0.92%, 20=2.09%, 50=30.10% 00:25:29.998 lat (msec) : 100=43.67%, 250=22.25%, 500=0.05% 00:25:29.998 cpu : usr=0.56%, sys=2.83%, ctx=1581, majf=0, minf=4097 00:25:29.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:29.998 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:29.998 issued rwts: total=8775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:29.998 00:25:29.998 Run status group 0 (all jobs): 00:25:29.998 READ: bw=1844MiB/s (1934MB/s), 124MiB/s-244MiB/s (130MB/s-256MB/s), io=18.2GiB (19.6GB), run=10025-10130msec 00:25:29.998 00:25:29.998 Disk stats (read/write): 00:25:29.998 nvme0n1: ios=11544/0, merge=0/0, ticks=1238178/0, in_queue=1238178, util=97.03% 00:25:29.998 nvme10n1: ios=15896/0, merge=0/0, ticks=1235891/0, in_queue=1235891, util=97.27% 00:25:29.998 nvme1n1: ios=12863/0, merge=0/0, ticks=1235493/0, in_queue=1235493, util=97.61% 00:25:29.998 nvme2n1: ios=11561/0, merge=0/0, ticks=1237326/0, in_queue=1237326, util=97.77% 00:25:29.998 nvme3n1: ios=10444/0, merge=0/0, ticks=1235120/0, in_queue=1235120, util=97.86% 00:25:29.998 nvme4n1: ios=19111/0, merge=0/0, ticks=1237779/0, in_queue=1237779, util=98.21% 00:25:29.998 nvme5n1: ios=12964/0, merge=0/0, ticks=1233766/0, in_queue=1233766, util=98.36% 00:25:29.998 nvme6n1: ios=15108/0, merge=0/0, ticks=1235861/0, in_queue=1235861, util=98.49% 00:25:29.998 nvme7n1: ios=9661/0, merge=0/0, ticks=1234816/0, in_queue=1234816, util=98.91% 00:25:29.998 nvme8n1: ios=10359/0, merge=0/0, ticks=1235110/0, in_queue=1235110, util=99.10% 00:25:29.998 nvme9n1: ios=17303/0, merge=0/0, ticks=1236177/0, in_queue=1236177, util=99.23% 00:25:29.998 14:01:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:29.998 [global] 00:25:29.998 thread=1 00:25:29.998 invalidate=1 00:25:29.998 rw=randwrite 00:25:29.998 time_based=1 00:25:29.998 runtime=10 00:25:29.998 ioengine=libaio 00:25:29.998 direct=1 00:25:29.998 bs=262144 00:25:29.998 
iodepth=64 00:25:29.998 norandommap=1 00:25:29.998 numjobs=1 00:25:29.998 00:25:29.998 [job0] 00:25:29.998 filename=/dev/nvme0n1 00:25:29.998 [job1] 00:25:29.998 filename=/dev/nvme10n1 00:25:29.998 [job2] 00:25:29.998 filename=/dev/nvme1n1 00:25:29.998 [job3] 00:25:29.998 filename=/dev/nvme2n1 00:25:29.998 [job4] 00:25:29.998 filename=/dev/nvme3n1 00:25:29.998 [job5] 00:25:29.998 filename=/dev/nvme4n1 00:25:29.998 [job6] 00:25:29.998 filename=/dev/nvme5n1 00:25:29.998 [job7] 00:25:29.998 filename=/dev/nvme6n1 00:25:29.998 [job8] 00:25:29.998 filename=/dev/nvme7n1 00:25:29.998 [job9] 00:25:29.998 filename=/dev/nvme8n1 00:25:29.998 [job10] 00:25:29.998 filename=/dev/nvme9n1 00:25:29.998 Could not set queue depth (nvme0n1) 00:25:29.998 Could not set queue depth (nvme10n1) 00:25:29.998 Could not set queue depth (nvme1n1) 00:25:29.998 Could not set queue depth (nvme2n1) 00:25:29.998 Could not set queue depth (nvme3n1) 00:25:29.998 Could not set queue depth (nvme4n1) 00:25:29.998 Could not set queue depth (nvme5n1) 00:25:29.998 Could not set queue depth (nvme6n1) 00:25:29.998 Could not set queue depth (nvme7n1) 00:25:29.998 Could not set queue depth (nvme8n1) 00:25:29.998 Could not set queue depth (nvme9n1) 00:25:29.998 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:25:29.998 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.998 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.999 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.999 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:29.999 fio-3.35 00:25:29.999 Starting 11 threads 00:25:40.017 00:25:40.017 job0: (groupid=0, jobs=1): err= 0: pid=1516921: Sun Jul 14 14:01:16 2024 00:25:40.017 write: IOPS=676, BW=169MiB/s (177MB/s)(1709MiB/10097msec); 0 zone resets 00:25:40.017 slat (usec): min=18, max=75001, avg=904.88, stdev=3106.38 00:25:40.017 clat (usec): min=691, max=383872, avg=93605.02, stdev=71687.57 00:25:40.017 lat (usec): min=725, max=383963, avg=94509.90, stdev=72423.71 00:25:40.017 clat percentiles (msec): 00:25:40.017 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 16], 20.00th=[ 34], 00:25:40.017 | 30.00th=[ 41], 40.00th=[ 51], 50.00th=[ 70], 60.00th=[ 95], 00:25:40.017 | 70.00th=[ 129], 80.00th=[ 169], 90.00th=[ 203], 95.00th=[ 224], 00:25:40.017 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 359], 99.95th=[ 372], 00:25:40.017 | 99.99th=[ 384] 00:25:40.017 bw ( KiB/s): min=69632, max=371712, per=12.38%, avg=173377.05, stdev=86635.55, samples=20 00:25:40.017 iops : min= 272, max= 1452, avg=677.25, stdev=338.42, samples=20 00:25:40.017 lat (usec) : 750=0.01%, 1000=0.12% 00:25:40.017 lat (msec) : 2=0.41%, 4=0.89%, 10=3.76%, 20=8.09%, 50=26.47% 00:25:40.017 lat (msec) : 100=21.38%, 250=36.37%, 500=2.50% 00:25:40.017 cpu : usr=1.84%, sys=2.18%, ctx=4116, majf=0, minf=1 00:25:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 
00:25:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.017 issued rwts: total=0,6835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.017 job1: (groupid=0, jobs=1): err= 0: pid=1516933: Sun Jul 14 14:01:16 2024 00:25:40.017 write: IOPS=471, BW=118MiB/s (124MB/s)(1199MiB/10168msec); 0 zone resets 00:25:40.017 slat (usec): min=22, max=63993, avg=1697.28, stdev=4263.35 00:25:40.017 clat (usec): min=930, max=357313, avg=133508.93, stdev=70008.07 00:25:40.017 lat (usec): min=965, max=357401, avg=135206.22, stdev=70948.12 00:25:40.017 clat percentiles (msec): 00:25:40.017 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 31], 20.00th=[ 67], 00:25:40.017 | 30.00th=[ 87], 40.00th=[ 118], 50.00th=[ 142], 60.00th=[ 165], 00:25:40.017 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 213], 95.00th=[ 241], 00:25:40.017 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 347], 00:25:40.017 | 99.99th=[ 359] 00:25:40.017 bw ( KiB/s): min=63488, max=251392, per=8.65%, avg=121199.30, stdev=46480.48, samples=20 00:25:40.017 iops : min= 248, max= 982, avg=473.40, stdev=181.59, samples=20 00:25:40.017 lat (usec) : 1000=0.04% 00:25:40.017 lat (msec) : 2=0.15%, 4=0.44%, 10=3.13%, 20=3.44%, 50=9.42% 00:25:40.017 lat (msec) : 100=17.26%, 250=63.46%, 500=2.67% 00:25:40.017 cpu : usr=1.51%, sys=1.56%, ctx=2387, majf=0, minf=1 00:25:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.017 issued rwts: total=0,4797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.017 job2: (groupid=0, jobs=1): err= 0: pid=1516934: Sun Jul 14 14:01:16 2024 
00:25:40.017 write: IOPS=418, BW=105MiB/s (110MB/s)(1059MiB/10121msec); 0 zone resets 00:25:40.017 slat (usec): min=18, max=47696, avg=2155.47, stdev=4407.02 00:25:40.017 clat (usec): min=1440, max=322752, avg=150713.24, stdev=50130.44 00:25:40.017 lat (usec): min=1483, max=322814, avg=152868.71, stdev=50875.55 00:25:40.017 clat percentiles (msec): 00:25:40.017 | 1.00th=[ 5], 5.00th=[ 50], 10.00th=[ 93], 20.00th=[ 123], 00:25:40.017 | 30.00th=[ 138], 40.00th=[ 146], 50.00th=[ 155], 60.00th=[ 165], 00:25:40.017 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 211], 00:25:40.017 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 321], 00:25:40.017 | 99.99th=[ 321] 00:25:40.017 bw ( KiB/s): min=57344, max=152576, per=7.63%, avg=106787.05, stdev=22447.53, samples=20 00:25:40.017 iops : min= 224, max= 596, avg=417.10, stdev=87.71, samples=20 00:25:40.017 lat (msec) : 2=0.02%, 4=0.66%, 10=1.11%, 20=0.87%, 50=2.46% 00:25:40.017 lat (msec) : 100=6.31%, 250=85.36%, 500=3.21% 00:25:40.017 cpu : usr=1.24%, sys=1.33%, ctx=1592, majf=0, minf=1 00:25:40.017 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:40.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.017 issued rwts: total=0,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.017 job3: (groupid=0, jobs=1): err= 0: pid=1516935: Sun Jul 14 14:01:16 2024 00:25:40.017 write: IOPS=504, BW=126MiB/s (132MB/s)(1280MiB/10158msec); 0 zone resets 00:25:40.017 slat (usec): min=18, max=48780, avg=1475.39, stdev=4115.73 00:25:40.017 clat (usec): min=934, max=335380, avg=125397.61, stdev=78527.38 00:25:40.017 lat (usec): min=979, max=335445, avg=126873.01, stdev=79630.09 00:25:40.017 clat percentiles (msec): 00:25:40.017 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 12], 20.00th=[ 30], 00:25:40.017 | 
30.00th=[ 65], 40.00th=[ 117], 50.00th=[ 144], 60.00th=[ 165], 00:25:40.017 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 234], 00:25:40.017 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 334], 00:25:40.017 | 99.99th=[ 334] 00:25:40.018 bw ( KiB/s): min=61440, max=289280, per=9.24%, avg=129459.20, stdev=61485.63, samples=20 00:25:40.018 iops : min= 240, max= 1130, avg=505.70, stdev=240.18, samples=20 00:25:40.018 lat (usec) : 1000=0.04% 00:25:40.018 lat (msec) : 2=0.84%, 4=2.09%, 10=5.94%, 20=6.41%, 50=11.95% 00:25:40.018 lat (msec) : 100=9.61%, 250=59.57%, 500=3.55% 00:25:40.018 cpu : usr=1.53%, sys=1.54%, ctx=2985, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job4: (groupid=0, jobs=1): err= 0: pid=1516936: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=602, BW=151MiB/s (158MB/s)(1518MiB/10077msec); 0 zone resets 00:25:40.018 slat (usec): min=17, max=62913, avg=1413.90, stdev=3242.86 00:25:40.018 clat (usec): min=1371, max=326278, avg=104738.29, stdev=54633.80 00:25:40.018 lat (usec): min=1430, max=326366, avg=106152.19, stdev=55286.23 00:25:40.018 clat percentiles (msec): 00:25:40.018 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 50], 20.00th=[ 55], 00:25:40.018 | 30.00th=[ 66], 40.00th=[ 85], 50.00th=[ 100], 60.00th=[ 113], 00:25:40.018 | 70.00th=[ 126], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 203], 00:25:40.018 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 305], 99.95th=[ 317], 00:25:40.018 | 99.99th=[ 326] 00:25:40.018 bw ( KiB/s): min=71680, max=313344, per=10.98%, avg=153812.15, stdev=63179.43, samples=20 00:25:40.018 iops : min= 280, max= 1224, avg=600.80, 
stdev=246.83, samples=20 00:25:40.018 lat (msec) : 2=0.07%, 4=1.10%, 10=1.32%, 20=1.65%, 50=7.03% 00:25:40.018 lat (msec) : 100=39.53%, 250=48.53%, 500=0.77% 00:25:40.018 cpu : usr=1.77%, sys=2.03%, ctx=2452, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,6071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job5: (groupid=0, jobs=1): err= 0: pid=1516937: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=493, BW=123MiB/s (129MB/s)(1255MiB/10169msec); 0 zone resets 00:25:40.018 slat (usec): min=17, max=78822, avg=1259.64, stdev=3982.02 00:25:40.018 clat (usec): min=944, max=361405, avg=127959.31, stdev=73417.91 00:25:40.018 lat (usec): min=1048, max=361467, avg=129218.95, stdev=74271.73 00:25:40.018 clat percentiles (msec): 00:25:40.018 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 48], 00:25:40.018 | 30.00th=[ 87], 40.00th=[ 118], 50.00th=[ 138], 60.00th=[ 155], 00:25:40.018 | 70.00th=[ 171], 80.00th=[ 188], 90.00th=[ 222], 95.00th=[ 247], 00:25:40.018 | 99.00th=[ 284], 99.50th=[ 300], 99.90th=[ 351], 99.95th=[ 351], 00:25:40.018 | 99.99th=[ 363] 00:25:40.018 bw ( KiB/s): min=63488, max=191488, per=9.06%, avg=126939.10, stdev=41104.39, samples=20 00:25:40.018 iops : min= 248, max= 748, avg=495.85, stdev=160.56, samples=20 00:25:40.018 lat (usec) : 1000=0.02% 00:25:40.018 lat (msec) : 2=0.22%, 4=0.80%, 10=3.57%, 20=5.66%, 50=10.18% 00:25:40.018 lat (msec) : 100=15.14%, 250=59.75%, 500=4.68% 00:25:40.018 cpu : usr=1.41%, sys=1.60%, ctx=3182, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,5021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job6: (groupid=0, jobs=1): err= 0: pid=1516938: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=517, BW=129MiB/s (136MB/s)(1311MiB/10121msec); 0 zone resets 00:25:40.018 slat (usec): min=18, max=40480, avg=1543.65, stdev=3625.40 00:25:40.018 clat (usec): min=871, max=275256, avg=121970.16, stdev=58067.32 00:25:40.018 lat (usec): min=944, max=275322, avg=123513.81, stdev=58829.51 00:25:40.018 clat percentiles (msec): 00:25:40.018 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 41], 20.00th=[ 69], 00:25:40.018 | 30.00th=[ 89], 40.00th=[ 116], 50.00th=[ 132], 60.00th=[ 146], 00:25:40.018 | 70.00th=[ 157], 80.00th=[ 171], 90.00th=[ 190], 95.00th=[ 203], 00:25:40.018 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 271], 99.95th=[ 275], 00:25:40.018 | 99.99th=[ 275] 00:25:40.018 bw ( KiB/s): min=88064, max=256512, per=9.47%, avg=132597.70, stdev=49221.17, samples=20 00:25:40.018 iops : min= 344, max= 1002, avg=517.95, stdev=192.27, samples=20 00:25:40.018 lat (usec) : 1000=0.04% 00:25:40.018 lat (msec) : 2=0.21%, 4=0.92%, 10=2.27%, 20=3.66%, 50=7.44% 00:25:40.018 lat (msec) : 100=20.24%, 250=64.19%, 500=1.03% 00:25:40.018 cpu : usr=1.43%, sys=1.69%, ctx=2451, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,5242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job7: (groupid=0, jobs=1): err= 0: pid=1516939: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=548, BW=137MiB/s (144MB/s)(1381MiB/10073msec); 0 zone resets 
00:25:40.018 slat (usec): min=25, max=52944, avg=1519.05, stdev=3525.43 00:25:40.018 clat (msec): min=2, max=328, avg=115.12, stdev=57.91 00:25:40.018 lat (msec): min=2, max=328, avg=116.64, stdev=58.65 00:25:40.018 clat percentiles (msec): 00:25:40.018 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 68], 00:25:40.018 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 107], 60.00th=[ 121], 00:25:40.018 | 70.00th=[ 144], 80.00th=[ 165], 90.00th=[ 186], 95.00th=[ 218], 00:25:40.018 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 321], 99.95th=[ 330], 00:25:40.018 | 99.99th=[ 330] 00:25:40.018 bw ( KiB/s): min=71680, max=278016, per=9.98%, avg=139786.45, stdev=57268.14, samples=20 00:25:40.018 iops : min= 280, max= 1086, avg=546.00, stdev=223.73, samples=20 00:25:40.018 lat (msec) : 4=0.09%, 10=0.58%, 20=1.18%, 50=11.28%, 100=33.37% 00:25:40.018 lat (msec) : 250=50.55%, 500=2.95% 00:25:40.018 cpu : usr=1.91%, sys=1.63%, ctx=2252, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,5523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job8: (groupid=0, jobs=1): err= 0: pid=1516940: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=426, BW=107MiB/s (112MB/s)(1079MiB/10120msec); 0 zone resets 00:25:40.018 slat (usec): min=15, max=33631, avg=1961.27, stdev=4092.08 00:25:40.018 clat (msec): min=4, max=302, avg=148.01, stdev=50.16 00:25:40.018 lat (msec): min=4, max=304, avg=149.97, stdev=50.78 00:25:40.018 clat percentiles (msec): 00:25:40.018 | 1.00th=[ 16], 5.00th=[ 56], 10.00th=[ 80], 20.00th=[ 115], 00:25:40.018 | 30.00th=[ 129], 40.00th=[ 144], 50.00th=[ 153], 60.00th=[ 159], 00:25:40.018 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 207], 95.00th=[ 230], 
00:25:40.018 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:25:40.018 | 99.99th=[ 305] 00:25:40.018 bw ( KiB/s): min=64000, max=174080, per=7.78%, avg=108913.70, stdev=26317.45, samples=20 00:25:40.018 iops : min= 250, max= 680, avg=425.40, stdev=102.79, samples=20 00:25:40.018 lat (msec) : 10=0.46%, 20=0.97%, 50=2.71%, 100=12.00%, 250=82.05% 00:25:40.018 lat (msec) : 500=1.81% 00:25:40.018 cpu : usr=1.56%, sys=1.12%, ctx=1737, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,4317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job9: (groupid=0, jobs=1): err= 0: pid=1516941: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=416, BW=104MiB/s (109MB/s)(1059MiB/10170msec); 0 zone resets 00:25:40.018 slat (usec): min=25, max=67270, avg=1961.04, stdev=4539.28 00:25:40.018 clat (msec): min=2, max=359, avg=151.65, stdev=63.29 00:25:40.018 lat (msec): min=2, max=359, avg=153.61, stdev=64.15 00:25:40.018 clat percentiles (msec): 00:25:40.018 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 61], 20.00th=[ 85], 00:25:40.018 | 30.00th=[ 132], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 171], 00:25:40.018 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 220], 95.00th=[ 247], 00:25:40.018 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 359], 00:25:40.018 | 99.99th=[ 359] 00:25:40.018 bw ( KiB/s): min=53248, max=215552, per=7.62%, avg=106777.60, stdev=40953.76, samples=20 00:25:40.018 iops : min= 208, max= 842, avg=417.10, stdev=159.98, samples=20 00:25:40.018 lat (msec) : 4=0.14%, 10=0.68%, 20=1.63%, 50=4.58%, 100=17.14% 00:25:40.018 lat (msec) : 250=71.43%, 500=4.39% 00:25:40.018 cpu : usr=1.27%, sys=1.37%, ctx=1910, majf=0, minf=1 
00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,4235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.018 job10: (groupid=0, jobs=1): err= 0: pid=1516942: Sun Jul 14 14:01:16 2024 00:25:40.018 write: IOPS=416, BW=104MiB/s (109MB/s)(1060MiB/10168msec); 0 zone resets 00:25:40.018 slat (usec): min=21, max=163422, avg=2050.73, stdev=5003.15 00:25:40.018 clat (usec): min=880, max=388327, avg=151376.53, stdev=68693.86 00:25:40.018 lat (usec): min=993, max=394484, avg=153427.27, stdev=69555.51 00:25:40.018 clat percentiles (usec): 00:25:40.018 | 1.00th=[ 1958], 5.00th=[ 21890], 10.00th=[ 42206], 20.00th=[ 70779], 00:25:40.018 | 30.00th=[145753], 40.00th=[154141], 50.00th=[164627], 60.00th=[177210], 00:25:40.018 | 70.00th=[189793], 80.00th=[200279], 90.00th=[217056], 95.00th=[238027], 00:25:40.018 | 99.00th=[316670], 99.50th=[346031], 99.90th=[383779], 99.95th=[387974], 00:25:40.018 | 99.99th=[387974] 00:25:40.018 bw ( KiB/s): min=73728, max=270848, per=7.63%, avg=106888.80, stdev=45251.34, samples=20 00:25:40.018 iops : min= 288, max= 1058, avg=417.50, stdev=176.78, samples=20 00:25:40.018 lat (usec) : 1000=0.14% 00:25:40.018 lat (msec) : 2=0.90%, 4=0.97%, 10=1.11%, 20=1.60%, 50=10.48% 00:25:40.018 lat (msec) : 100=7.31%, 250=73.57%, 500=3.92% 00:25:40.018 cpu : usr=1.31%, sys=1.24%, ctx=1725, majf=0, minf=1 00:25:40.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:40.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.018 issued rwts: total=0,4238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.018 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:25:40.018 00:25:40.019 Run status group 0 (all jobs): 00:25:40.019 WRITE: bw=1368MiB/s (1434MB/s), 104MiB/s-169MiB/s (109MB/s-177MB/s), io=13.6GiB (14.6GB), run=10073-10170msec 00:25:40.019 00:25:40.019 Disk stats (read/write): 00:25:40.019 nvme0n1: ios=49/13475, merge=0/0, ticks=38/1219278, in_queue=1219316, util=97.33% 00:25:40.019 nvme10n1: ios=41/9428, merge=0/0, ticks=1891/1206499, in_queue=1208390, util=100.00% 00:25:40.019 nvme1n1: ios=44/8292, merge=0/0, ticks=1150/1205652, in_queue=1206802, util=100.00% 00:25:40.019 nvme2n1: ios=44/10038, merge=0/0, ticks=1288/1215736, in_queue=1217024, util=100.00% 00:25:40.019 nvme3n1: ios=44/11910, merge=0/0, ticks=862/1215915, in_queue=1216777, util=100.00% 00:25:40.019 nvme4n1: ios=46/9879, merge=0/0, ticks=1816/1210038, in_queue=1211854, util=100.00% 00:25:40.019 nvme5n1: ios=29/10324, merge=0/0, ticks=182/1215247, in_queue=1215429, util=99.92% 00:25:40.019 nvme6n1: ios=36/10728, merge=0/0, ticks=736/1210061, in_queue=1210797, util=100.00% 00:25:40.019 nvme7n1: ios=0/8462, merge=0/0, ticks=0/1210523, in_queue=1210523, util=98.80% 00:25:40.019 nvme8n1: ios=38/8302, merge=0/0, ticks=774/1211057, in_queue=1211831, util=100.00% 00:25:40.019 nvme9n1: ios=45/8309, merge=0/0, ticks=990/1209376, in_queue=1210366, util=100.00% 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:40.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:40.019 14:01:17 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:40.019 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:40.019 14:01:17 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:40.019 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.019 
14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.019 14:01:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:40.276 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.276 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:40.532 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.532 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:40.789 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.789 14:01:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:41.352 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:41.353 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.353 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:41.609 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:41.609 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1215 -- # local i=0 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.609 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:41.867 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:41.867 rmmod nvme_tcp 00:25:41.867 rmmod nvme_fabrics 00:25:41.867 rmmod nvme_keyring 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1511625 ']' 00:25:41.867 14:01:19 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1511625 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1511625 ']' 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1511625 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1511625 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1511625' 00:25:41.867 killing process with pid 1511625 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1511625 00:25:41.867 14:01:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 1511625 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.431 14:01:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.431 
14:01:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.334 14:01:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.334 00:25:44.334 real 1m0.387s 00:25:44.334 user 3m26.146s 00:25:44.334 sys 0m22.630s 00:25:44.334 14:01:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:44.334 14:01:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.334 ************************************ 00:25:44.334 END TEST nvmf_multiconnection 00:25:44.334 ************************************ 00:25:44.334 14:01:22 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:44.334 14:01:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:44.334 14:01:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:44.334 14:01:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.591 ************************************ 00:25:44.591 START TEST nvmf_initiator_timeout 00:25:44.591 ************************************ 00:25:44.591 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:44.591 * Looking for test storage... 
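The multiconnection run above ends with a group summary (WRITE: bw=1368MiB/s (1434MB/s), io=13.6GiB (14.6GB)) plus per-job `per=` shares. As an illustrative sanity check, this small awk sketch relates the per-job average bandwidths (KiB/s, copied from the trace) to the group bandwidth and to the decimal-unit figure fio prints alongside the binary one; fio derives its shares from its own bandwidth samples, so small rounding differences from the printed 7.78%/7.62%/7.63% are expected:

```shell
#!/bin/sh
# Sanity-check the group summary printed at the end of the run:
#   WRITE: bw=1368MiB/s (1434MB/s) ... io=13.6GiB (14.6GB)
# using the per-job avg bandwidths reported for jobs 8-10 above.
awk 'BEGIN {
    group_kib = 1368 * 1024                 # group bw in KiB/s
    # share of group bandwidth per job (fio prints these as "per=")
    printf "job8  per=%.2f%%\n", 100 * 108913.70 / group_kib
    printf "job9  per=%.2f%%\n", 100 * 106777.60 / group_kib
    printf "job10 per=%.2f%%\n", 100 * 106888.80 / group_kib
    # binary vs decimal units: 1368 MiB/s expressed in MB/s
    printf "group %.0f MB/s\n", 1368 * 1024 * 1024 / 1e6
}'
```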
00:25:44.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.592 14:01:22 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.592 14:01:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.488 
14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.488 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:25:46.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.489 14:01:24 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.489 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:46.747 00:25:46.747 --- 10.0.0.2 ping statistics --- 00:25:46.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.747 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:25:46.747 00:25:46.747 --- 10.0.0.1 ping statistics --- 00:25:46.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.747 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1520402 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1520402 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1520402 ']' 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:46.747 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.747 [2024-07-14 14:01:24.607767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:46.747 [2024-07-14 14:01:24.607851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.747 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.747 [2024-07-14 14:01:24.671897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.006 [2024-07-14 14:01:24.758592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.006 [2024-07-14 14:01:24.758656] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.006 [2024-07-14 14:01:24.758670] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.006 [2024-07-14 14:01:24.758681] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.006 [2024-07-14 14:01:24.758690] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:47.006 [2024-07-14 14:01:24.758817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.006 [2024-07-14 14:01:24.758905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.006 [2024-07-14 14:01:24.758908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.006 [2024-07-14 14:01:24.758846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 Malloc0 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 Delay0 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 [2024-07-14 14:01:24.951067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.006 14:01:24 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.006 [2024-07-14 14:01:24.979361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.006 14:01:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:47.940 14:01:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:47.940 14:01:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:47.940 14:01:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.940 14:01:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:47.940 14:01:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:49.839 14:01:27 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1520709 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:49.839 14:01:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:49.839 [global] 00:25:49.839 thread=1 00:25:49.839 invalidate=1 00:25:49.839 rw=write 00:25:49.839 time_based=1 00:25:49.839 runtime=60 00:25:49.839 ioengine=libaio 00:25:49.839 direct=1 00:25:49.839 bs=4096 00:25:49.839 iodepth=1 00:25:49.839 norandommap=0 00:25:49.839 numjobs=1 00:25:49.839 00:25:49.839 verify_dump=1 00:25:49.839 verify_backlog=512 00:25:49.839 verify_state_save=0 00:25:49.839 do_verify=1 00:25:49.839 verify=crc32c-intel 00:25:49.839 [job0] 00:25:49.839 filename=/dev/nvme0n1 00:25:49.839 Could not set queue depth (nvme0n1) 00:25:49.839 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:49.839 fio-3.35 00:25:49.839 Starting 1 thread 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.116 true 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.116 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.116 true 00:25:53.117 14:01:30 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.117 true 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.117 true 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.117 14:01:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.395 true 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.395 true 
00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.395 true 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.395 true 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:56.395 14:01:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1520709 00:26:52.637 00:26:52.637 job0: (groupid=0, jobs=1): err= 0: pid=1520806: Sun Jul 14 14:02:27 2024 00:26:52.637 read: IOPS=100, BW=402KiB/s (412kB/s)(23.6MiB/60001msec) 00:26:52.637 slat (usec): min=4, max=6869, avg=14.80, stdev=88.66 00:26:52.637 clat (usec): min=221, max=40797k, avg=9698.36, stdev=525395.98 00:26:52.637 lat (usec): min=226, max=40797k, avg=9713.16, stdev=525396.10 00:26:52.637 clat percentiles (usec): 00:26:52.637 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 00:26:52.637 | 20.00th=[ 249], 30.00th=[ 255], 40.00th=[ 262], 00:26:52.637 | 50.00th=[ 269], 60.00th=[ 281], 70.00th=[ 289], 00:26:52.637 | 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 41157], 00:26:52.637 | 99.00th=[ 41157], 99.50th=[ 41157], 
99.90th=[ 41157], 00:26:52.637 | 99.95th=[ 43779], 99.99th=[17112761] 00:26:52.637 write: IOPS=102, BW=410KiB/s (419kB/s)(24.0MiB/60001msec); 0 zone resets 00:26:52.637 slat (nsec): min=5917, max=81469, avg=13428.38, stdev=7504.08 00:26:52.637 clat (usec): min=164, max=495, avg=210.22, stdev=39.01 00:26:52.637 lat (usec): min=172, max=534, avg=223.65, stdev=42.90 00:26:52.637 clat percentiles (usec): 00:26:52.637 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:26:52.637 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:26:52.637 | 70.00th=[ 215], 80.00th=[ 231], 90.00th=[ 258], 95.00th=[ 314], 00:26:52.637 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 375], 99.95th=[ 379], 00:26:52.637 | 99.99th=[ 494] 00:26:52.637 bw ( KiB/s): min= 168, max= 8192, per=100.00%, avg=5632.00, stdev=2983.66, samples=8 00:26:52.637 iops : min= 42, max= 2048, avg=1408.00, stdev=745.92, samples=8 00:26:52.637 lat (usec) : 250=55.98%, 500=40.68%, 750=0.08% 00:26:52.637 lat (msec) : 2=0.01%, 50=3.24%, >=2000=0.01% 00:26:52.637 cpu : usr=0.15%, sys=0.30%, ctx=12176, majf=0, minf=2 00:26:52.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.637 issued rwts: total=6031,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:52.637 00:26:52.637 Run status group 0 (all jobs): 00:26:52.637 READ: bw=402KiB/s (412kB/s), 402KiB/s-402KiB/s (412kB/s-412kB/s), io=23.6MiB (24.7MB), run=60001-60001msec 00:26:52.637 WRITE: bw=410KiB/s (419kB/s), 410KiB/s-410KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60001-60001msec 00:26:52.637 00:26:52.637 Disk stats (read/write): 00:26:52.637 nvme0n1: ios=5748/6144, merge=0/0, ticks=18895/1257, in_queue=20152, util=99.74% 00:26:52.637 14:02:27 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:52.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:52.637 nvmf hotplug test: fio successful as expected 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.637 rmmod nvme_tcp 00:26:52.637 rmmod nvme_fabrics 00:26:52.637 rmmod nvme_keyring 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1520402 ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1520402 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1520402 ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1520402 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1520402 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1520402' 00:26:52.637 killing process with pid 1520402 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1520402 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1520402 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.637 14:02:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.637 14:02:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:52.637 00:26:52.637 real 1m8.196s 00:26:52.637 user 4m10.823s 00:26:52.638 sys 0m6.508s 00:26:52.638 14:02:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:52.638 14:02:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.638 ************************************ 00:26:52.638 END TEST nvmf_initiator_timeout 00:26:52.638 ************************************ 00:26:52.638 14:02:30 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy 
]] 00:26:52.638 14:02:30 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:52.638 14:02:30 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:52.638 14:02:30 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.638 14:02:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:55.167 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:55.167 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.167 
14:02:32 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:55.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.167 14:02:32 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:55.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:55.168 14:02:32 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:55.168 14:02:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:55.168 14:02:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:55.168 14:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.168 ************************************ 00:26:55.168 START TEST nvmf_perf_adq 00:26:55.168 ************************************ 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:55.168 * Looking for test storage... 
00:26:55.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.168 14:02:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.168 14:02:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.168 14:02:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.075 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:57.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.076 14:02:34 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:57.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.076 
14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:57.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:57.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:57.076 14:02:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:57.334 14:02:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:59.234 14:02:37 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.544 14:02:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:04.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:04.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.544 14:02:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:04.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:04.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.544 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:04.545 00:27:04.545 --- 10.0.0.2 ping statistics --- 00:27:04.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.545 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:27:04.545 00:27:04.545 --- 10.0.0.1 ping statistics --- 00:27:04.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.545 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1532309 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1532309 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 
-- # '[' -z 1532309 ']' 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:04.545 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.545 [2024-07-14 14:02:42.399747] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:04.545 [2024-07-14 14:02:42.399828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.545 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.545 [2024-07-14 14:02:42.473692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.802 [2024-07-14 14:02:42.572126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.802 [2024-07-14 14:02:42.572184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.802 [2024-07-14 14:02:42.572201] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.802 [2024-07-14 14:02:42.572214] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.802 [2024-07-14 14:02:42.572225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:04.802 [2024-07-14 14:02:42.572285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.802 [2024-07-14 14:02:42.572341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.802 [2024-07-14 14:02:42.572372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:04.802 [2024-07-14 14:02:42.572374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.802 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.802 [2024-07-14 14:02:42.781752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.059 Malloc1 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.059 
14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.059 [2024-07-14 14:02:42.835052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1532431 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:05.059 14:02:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:05.059 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:06.958 
"tick_rate": 2700000000, 00:27:06.958 "poll_groups": [ 00:27:06.958 { 00:27:06.958 "name": "nvmf_tgt_poll_group_000", 00:27:06.958 "admin_qpairs": 1, 00:27:06.958 "io_qpairs": 1, 00:27:06.958 "current_admin_qpairs": 1, 00:27:06.958 "current_io_qpairs": 1, 00:27:06.958 "pending_bdev_io": 0, 00:27:06.958 "completed_nvme_io": 18761, 00:27:06.958 "transports": [ 00:27:06.958 { 00:27:06.958 "trtype": "TCP" 00:27:06.958 } 00:27:06.958 ] 00:27:06.958 }, 00:27:06.958 { 00:27:06.958 "name": "nvmf_tgt_poll_group_001", 00:27:06.958 "admin_qpairs": 0, 00:27:06.958 "io_qpairs": 1, 00:27:06.958 "current_admin_qpairs": 0, 00:27:06.958 "current_io_qpairs": 1, 00:27:06.958 "pending_bdev_io": 0, 00:27:06.958 "completed_nvme_io": 18155, 00:27:06.958 "transports": [ 00:27:06.958 { 00:27:06.958 "trtype": "TCP" 00:27:06.958 } 00:27:06.958 ] 00:27:06.958 }, 00:27:06.958 { 00:27:06.958 "name": "nvmf_tgt_poll_group_002", 00:27:06.958 "admin_qpairs": 0, 00:27:06.958 "io_qpairs": 1, 00:27:06.958 "current_admin_qpairs": 0, 00:27:06.958 "current_io_qpairs": 1, 00:27:06.958 "pending_bdev_io": 0, 00:27:06.958 "completed_nvme_io": 19483, 00:27:06.958 "transports": [ 00:27:06.958 { 00:27:06.958 "trtype": "TCP" 00:27:06.958 } 00:27:06.958 ] 00:27:06.958 }, 00:27:06.958 { 00:27:06.958 "name": "nvmf_tgt_poll_group_003", 00:27:06.958 "admin_qpairs": 0, 00:27:06.958 "io_qpairs": 1, 00:27:06.958 "current_admin_qpairs": 0, 00:27:06.958 "current_io_qpairs": 1, 00:27:06.958 "pending_bdev_io": 0, 00:27:06.958 "completed_nvme_io": 18858, 00:27:06.958 "transports": [ 00:27:06.958 { 00:27:06.958 "trtype": "TCP" 00:27:06.958 } 00:27:06.958 ] 00:27:06.958 } 00:27:06.958 ] 00:27:06.958 }' 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:06.958 14:02:44 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:06.958 14:02:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1532431 00:27:16.949 Initializing NVMe Controllers 00:27:16.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:16.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:16.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:16.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:16.949 Initialization complete. Launching workers. 00:27:16.949 ======================================================== 00:27:16.949 Latency(us) 00:27:16.949 Device Information : IOPS MiB/s Average min max 00:27:16.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10520.76 41.10 6083.19 2526.01 10035.47 00:27:16.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10206.47 39.87 6270.08 2558.71 10265.25 00:27:16.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10834.05 42.32 5907.81 2459.35 9719.22 00:27:16.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10411.06 40.67 6147.85 2488.09 10194.14 00:27:16.949 ======================================================== 00:27:16.949 Total : 41972.35 163.95 6099.40 2459.35 10265.25 00:27:16.949 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:16.949 14:02:53 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.949 rmmod nvme_tcp 00:27:16.949 rmmod nvme_fabrics 00:27:16.949 rmmod nvme_keyring 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1532309 ']' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1532309 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1532309 ']' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1532309 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1532309 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1532309' 00:27:16.949 killing process with pid 1532309 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1532309 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1532309 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.949 14:02:53 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.949 14:02:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.515 14:02:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:17.515 14:02:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:17.515 14:02:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:18.450 14:02:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:20.348 14:02:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.641 14:03:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:25.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:25.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:25.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:25.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:25.641 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:25.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:25.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:27:25.642 00:27:25.642 --- 10.0.0.2 ping statistics --- 00:27:25.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.642 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:27:25.642 00:27:25.642 --- 10.0.0.1 ping statistics --- 00:27:25.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.642 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:25.642 net.core.busy_poll = 1 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:25.642 net.core.busy_read = 1 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1535160 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1535160 00:27:25.642 14:03:03 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1535160 ']' 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.642 [2024-07-14 14:03:03.361839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:25.642 [2024-07-14 14:03:03.361937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.642 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.642 [2024-07-14 14:03:03.432417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.642 [2024-07-14 14:03:03.523953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.642 [2024-07-14 14:03:03.524004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.642 [2024-07-14 14:03:03.524039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.642 [2024-07-14 14:03:03.524052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.642 [2024-07-14 14:03:03.524062] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:25.642 [2024-07-14 14:03:03.524119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.642 [2024-07-14 14:03:03.524144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.642 [2024-07-14 14:03:03.524385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.642 [2024-07-14 14:03:03.524388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.642 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 [2024-07-14 14:03:03.753739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 Malloc1 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 
14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:25.900 [2024-07-14 14:03:03.806392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1535194 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:25.900 14:03:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:25.900 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:28.426 
"tick_rate": 2700000000, 00:27:28.426 "poll_groups": [ 00:27:28.426 { 00:27:28.426 "name": "nvmf_tgt_poll_group_000", 00:27:28.426 "admin_qpairs": 1, 00:27:28.426 "io_qpairs": 2, 00:27:28.426 "current_admin_qpairs": 1, 00:27:28.426 "current_io_qpairs": 2, 00:27:28.426 "pending_bdev_io": 0, 00:27:28.426 "completed_nvme_io": 25905, 00:27:28.426 "transports": [ 00:27:28.426 { 00:27:28.426 "trtype": "TCP" 00:27:28.426 } 00:27:28.426 ] 00:27:28.426 }, 00:27:28.426 { 00:27:28.426 "name": "nvmf_tgt_poll_group_001", 00:27:28.426 "admin_qpairs": 0, 00:27:28.426 "io_qpairs": 2, 00:27:28.426 "current_admin_qpairs": 0, 00:27:28.426 "current_io_qpairs": 2, 00:27:28.426 "pending_bdev_io": 0, 00:27:28.426 "completed_nvme_io": 26264, 00:27:28.426 "transports": [ 00:27:28.426 { 00:27:28.426 "trtype": "TCP" 00:27:28.426 } 00:27:28.426 ] 00:27:28.426 }, 00:27:28.426 { 00:27:28.426 "name": "nvmf_tgt_poll_group_002", 00:27:28.426 "admin_qpairs": 0, 00:27:28.426 "io_qpairs": 0, 00:27:28.426 "current_admin_qpairs": 0, 00:27:28.426 "current_io_qpairs": 0, 00:27:28.426 "pending_bdev_io": 0, 00:27:28.426 "completed_nvme_io": 0, 00:27:28.426 "transports": [ 00:27:28.426 { 00:27:28.426 "trtype": "TCP" 00:27:28.426 } 00:27:28.426 ] 00:27:28.426 }, 00:27:28.426 { 00:27:28.426 "name": "nvmf_tgt_poll_group_003", 00:27:28.426 "admin_qpairs": 0, 00:27:28.426 "io_qpairs": 0, 00:27:28.426 "current_admin_qpairs": 0, 00:27:28.426 "current_io_qpairs": 0, 00:27:28.426 "pending_bdev_io": 0, 00:27:28.426 "completed_nvme_io": 0, 00:27:28.426 "transports": [ 00:27:28.426 { 00:27:28.426 "trtype": "TCP" 00:27:28.426 } 00:27:28.426 ] 00:27:28.426 } 00:27:28.426 ] 00:27:28.426 }' 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:28.426 14:03:05 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:28.426 14:03:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1535194 00:27:36.529 Initializing NVMe Controllers 00:27:36.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:36.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:36.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:36.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:36.529 Initialization complete. Launching workers. 00:27:36.529 ======================================================== 00:27:36.529 Latency(us) 00:27:36.529 Device Information : IOPS MiB/s Average min max 00:27:36.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6273.40 24.51 10207.12 1742.13 54446.57 00:27:36.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6510.50 25.43 9847.60 1675.66 54826.31 00:27:36.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7516.90 29.36 8518.07 1305.80 55307.98 00:27:36.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7065.40 27.60 9057.90 1626.62 54832.98 00:27:36.529 ======================================================== 00:27:36.529 Total : 27366.20 106.90 9360.94 1305.80 55307.98 00:27:36.529 00:27:36.529 14:03:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:36.529 14:03:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.529 14:03:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:36.529 14:03:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.529 14:03:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:36.529 14:03:13 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.529 14:03:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.529 rmmod nvme_tcp 00:27:36.529 rmmod nvme_fabrics 00:27:36.529 rmmod nvme_keyring 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1535160 ']' 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1535160 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1535160 ']' 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1535160 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1535160 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1535160' 00:27:36.529 killing process with pid 1535160 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1535160 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1535160 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.529 14:03:14 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.529 14:03:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.429 14:03:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.429 14:03:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:38.429 00:27:38.429 real 0m43.724s 00:27:38.429 user 2m38.337s 00:27:38.429 sys 0m10.180s 00:27:38.429 14:03:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:38.429 14:03:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.429 ************************************ 00:27:38.429 END TEST nvmf_perf_adq 00:27:38.429 ************************************ 00:27:38.429 14:03:16 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:38.429 14:03:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:38.429 14:03:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:38.429 14:03:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:38.429 ************************************ 00:27:38.429 START TEST nvmf_shutdown 00:27:38.429 ************************************ 00:27:38.429 14:03:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:38.688 * Looking for test storage... 
00:27:38.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.688 14:03:16 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.688 14:03:16 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:38.688 ************************************ 00:27:38.688 START TEST nvmf_shutdown_tc1 00:27:38.688 ************************************ 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.688 14:03:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.688 14:03:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:40.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.590 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.591 14:03:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.591 
14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.591 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:40.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:27:40.850 00:27:40.850 --- 10.0.0.2 ping statistics --- 00:27:40.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.850 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:27:40.850 00:27:40.850 --- 10.0.0.1 ping statistics --- 00:27:40.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.850 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1538844 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1538844 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1538844 ']' 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:40.850 14:03:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:40.850 [2024-07-14 14:03:18.734069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:27:40.850 [2024-07-14 14:03:18.734160] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.850 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.850 [2024-07-14 14:03:18.806559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:41.107 [2024-07-14 14:03:18.896601] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.107 [2024-07-14 14:03:18.896669] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.107 [2024-07-14 14:03:18.896696] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.107 [2024-07-14 14:03:18.896707] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.108 [2024-07-14 14:03:18.896717] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.108 [2024-07-14 14:03:18.896767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.108 [2024-07-14 14:03:18.897036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.108 [2024-07-14 14:03:18.897097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:41.108 [2024-07-14 14:03:18.900895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.108 [2024-07-14 14:03:19.047431] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:41.108 
14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.108 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.365 Malloc1 00:27:41.365 [2024-07-14 14:03:19.122875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.365 Malloc2 00:27:41.365 Malloc3 00:27:41.365 Malloc4 00:27:41.365 Malloc5 00:27:41.365 Malloc6 00:27:41.623 Malloc7 00:27:41.623 Malloc8 00:27:41.623 Malloc9 00:27:41.623 Malloc10 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1539023 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1539023 
/var/tmp/bdevperf.sock 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1539023 ']' 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:41.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 
)") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 
"hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.623 { 00:27:41.623 "params": { 00:27:41.623 "name": "Nvme$subsystem", 00:27:41.623 "trtype": "$TEST_TRANSPORT", 00:27:41.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.623 "adrfam": "ipv4", 00:27:41.623 "trsvcid": "$NVMF_PORT", 00:27:41.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.623 "hdgst": ${hdgst:-false}, 00:27:41.623 "ddgst": ${ddgst:-false} 00:27:41.623 }, 00:27:41.623 "method": "bdev_nvme_attach_controller" 00:27:41.623 } 00:27:41.623 EOF 00:27:41.623 )") 00:27:41.623 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.881 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:41.881 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:41.881 14:03:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme1", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme2", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme3", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme4", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme5", 00:27:41.881 
"trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme6", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme7", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme8", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme9", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": 
false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 },{ 00:27:41.881 "params": { 00:27:41.881 "name": "Nvme10", 00:27:41.881 "trtype": "tcp", 00:27:41.881 "traddr": "10.0.0.2", 00:27:41.881 "adrfam": "ipv4", 00:27:41.881 "trsvcid": "4420", 00:27:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:41.881 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:41.881 "hdgst": false, 00:27:41.881 "ddgst": false 00:27:41.881 }, 00:27:41.881 "method": "bdev_nvme_attach_controller" 00:27:41.881 }' 00:27:41.881 [2024-07-14 14:03:19.615490] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:41.881 [2024-07-14 14:03:19.615566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:41.881 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.881 [2024-07-14 14:03:19.681683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.881 [2024-07-14 14:03:19.769303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1539023 
00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:43.810 14:03:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:44.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1539023 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1538844 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:44.743 { 00:27:44.743 "params": { 00:27:44.743 "name": "Nvme$subsystem", 00:27:44.743 "trtype": "$TEST_TRANSPORT", 00:27:44.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.743 "adrfam": "ipv4", 00:27:44.743 "trsvcid": "$NVMF_PORT", 00:27:44.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.743 "hdgst": ${hdgst:-false}, 00:27:44.743 "ddgst": ${ddgst:-false} 00:27:44.743 }, 00:27:44.743 "method": "bdev_nvme_attach_controller" 00:27:44.743 } 00:27:44.743 EOF 00:27:44.743 )") 00:27:44.743 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:44.744 14:03:22
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:44.744 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:44.744 14:03:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme1", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme2", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme3", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme4", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 
},{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme5", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme6", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme7", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme8", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme9", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:44.744 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 },{ 00:27:44.744 "params": { 00:27:44.744 "name": "Nvme10", 00:27:44.744 "trtype": "tcp", 00:27:44.744 "traddr": "10.0.0.2", 00:27:44.744 "adrfam": "ipv4", 00:27:44.744 "trsvcid": "4420", 00:27:44.744 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:44.744 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:44.744 "hdgst": false, 00:27:44.744 "ddgst": false 00:27:44.744 }, 00:27:44.744 "method": "bdev_nvme_attach_controller" 00:27:44.744 }' 00:27:44.744 [2024-07-14 14:03:22.621847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:44.744 [2024-07-14 14:03:22.621975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539441 ] 00:27:44.744 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.744 [2024-07-14 14:03:22.686208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.029 [2024-07-14 14:03:22.786618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.395 Running I/O for 1 seconds... 
00:27:47.764 00:27:47.764 Latency(us) 00:27:47.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme1n1 : 1.13 226.73 14.17 0.00 0.00 279489.04 20486.07 251658.24 00:27:47.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme2n1 : 1.09 235.38 14.71 0.00 0.00 263777.66 18641.35 246997.90 00:27:47.765 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme3n1 : 1.09 233.89 14.62 0.00 0.00 261682.44 19418.07 250104.79 00:27:47.765 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme4n1 : 1.17 272.70 17.04 0.00 0.00 221322.51 15825.73 254765.13 00:27:47.765 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme5n1 : 1.14 224.73 14.05 0.00 0.00 263398.02 21359.88 253211.69 00:27:47.765 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme6n1 : 1.14 224.19 14.01 0.00 0.00 259773.82 21748.24 250104.79 00:27:47.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme7n1 : 1.13 226.20 14.14 0.00 0.00 252832.62 19126.80 251658.24 00:27:47.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme8n1 : 1.18 270.99 16.94 0.00 0.00 208551.94 18252.99 250104.79 00:27:47.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme9n1 : 1.17 218.80 13.68 0.00 0.00 253560.98 21942.42 270299.59 00:27:47.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.765 Verification LBA range: start 0x0 length 0x400 00:27:47.765 Nvme10n1 : 1.19 272.89 17.06 0.00 0.00 200256.16 6505.05 273406.48 00:27:47.765 =================================================================================================================== 00:27:47.765 Total : 2406.51 150.41 0.00 0.00 243860.14 6505.05 273406.48 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.765 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.022 rmmod nvme_tcp 00:27:48.022 rmmod nvme_fabrics 00:27:48.022 rmmod 
nvme_keyring 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1538844 ']' 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1538844 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1538844 ']' 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1538844 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1538844 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1538844' 00:27:48.022 killing process with pid 1538844 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1538844 00:27:48.022 14:03:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1538844 00:27:48.588 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.589 14:03:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.489 00:27:50.489 real 0m11.907s 00:27:50.489 user 0m34.524s 00:27:50.489 sys 0m3.176s 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.489 ************************************ 00:27:50.489 END TEST nvmf_shutdown_tc1 00:27:50.489 ************************************ 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.489 ************************************ 00:27:50.489 START TEST nvmf_shutdown_tc2 00:27:50.489 ************************************ 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # 
nvmf_shutdown_tc2 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.489 14:03:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.489 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:50.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:50.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.748 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:50.749 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:50.749 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.749 14:03:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:27:50.749 00:27:50.749 --- 10.0.0.2 ping statistics --- 00:27:50.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.749 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:27:50.749 00:27:50.749 --- 10.0.0.1 ping statistics --- 00:27:50.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.749 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1540210 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1540210 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1540210 ']' 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.749 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.749 [2024-07-14 14:03:28.671820] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:50.749 [2024-07-14 14:03:28.671912] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.749 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.008 [2024-07-14 14:03:28.737480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.008 [2024-07-14 14:03:28.828665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.008 [2024-07-14 14:03:28.828730] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:51.008 [2024-07-14 14:03:28.828745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.008 [2024-07-14 14:03:28.828757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.008 [2024-07-14 14:03:28.828767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.008 [2024-07-14 14:03:28.828901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.008 [2024-07-14 14:03:28.829181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.008 [2024-07-14 14:03:28.829239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:51.008 [2024-07-14 14:03:28.829239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.008 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.008 [2024-07-14 14:03:28.985792] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:51.268 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.268 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:51.268 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:51.268 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:51.268 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.268 14:03:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:51.268 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:51.269 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.269 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.269 Malloc1 00:27:51.269 [2024-07-14 14:03:29.075170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.269 Malloc2 00:27:51.269 Malloc3 00:27:51.269 Malloc4 00:27:51.269 Malloc5 00:27:51.528 Malloc6 00:27:51.528 Malloc7 00:27:51.528 Malloc8 00:27:51.528 Malloc9 00:27:51.528 Malloc10 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1540389 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1540389 /var/tmp/bdevperf.sock 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1540389 ']' 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:51.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.786 { 00:27:51.786 "params": { 00:27:51.786 "name": "Nvme$subsystem", 00:27:51.786 "trtype": "$TEST_TRANSPORT", 00:27:51.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.786 "adrfam": "ipv4", 00:27:51.786 "trsvcid": "$NVMF_PORT", 00:27:51.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.786 "hdgst": ${hdgst:-false}, 00:27:51.786 "ddgst": ${ddgst:-false} 00:27:51.786 }, 00:27:51.786 "method": "bdev_nvme_attach_controller" 00:27:51.786 } 00:27:51.786 EOF 00:27:51.786 )") 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.786 { 00:27:51.786 "params": { 00:27:51.786 "name": "Nvme$subsystem", 00:27:51.786 "trtype": "$TEST_TRANSPORT", 00:27:51.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.786 "adrfam": "ipv4", 00:27:51.786 "trsvcid": "$NVMF_PORT", 00:27:51.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.786 "hdgst": ${hdgst:-false}, 00:27:51.786 "ddgst": ${ddgst:-false} 00:27:51.786 
}, 00:27:51.786 "method": "bdev_nvme_attach_controller" 00:27:51.786 } 00:27:51.786 EOF 00:27:51.786 )") 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.786 { 00:27:51.786 "params": { 00:27:51.786 "name": "Nvme$subsystem", 00:27:51.786 "trtype": "$TEST_TRANSPORT", 00:27:51.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.786 "adrfam": "ipv4", 00:27:51.786 "trsvcid": "$NVMF_PORT", 00:27:51.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.786 "hdgst": ${hdgst:-false}, 00:27:51.786 "ddgst": ${ddgst:-false} 00:27:51.786 }, 00:27:51.786 "method": "bdev_nvme_attach_controller" 00:27:51.786 } 00:27:51.786 EOF 00:27:51.786 )") 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.786 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.786 { 00:27:51.786 "params": { 00:27:51.786 "name": "Nvme$subsystem", 00:27:51.786 "trtype": "$TEST_TRANSPORT", 00:27:51.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.786 "adrfam": "ipv4", 00:27:51.786 "trsvcid": "$NVMF_PORT", 00:27:51.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.787 { 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme$subsystem", 00:27:51.787 "trtype": "$TEST_TRANSPORT", 00:27:51.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "$NVMF_PORT", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.787 { 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme$subsystem", 00:27:51.787 "trtype": "$TEST_TRANSPORT", 00:27:51.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "$NVMF_PORT", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.787 { 00:27:51.787 
"params": { 00:27:51.787 "name": "Nvme$subsystem", 00:27:51.787 "trtype": "$TEST_TRANSPORT", 00:27:51.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "$NVMF_PORT", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.787 { 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme$subsystem", 00:27:51.787 "trtype": "$TEST_TRANSPORT", 00:27:51.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "$NVMF_PORT", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.787 { 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme$subsystem", 00:27:51.787 "trtype": "$TEST_TRANSPORT", 00:27:51.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "$NVMF_PORT", 00:27:51.787 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.787 { 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme$subsystem", 00:27:51.787 "trtype": "$TEST_TRANSPORT", 00:27:51.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "$NVMF_PORT", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.787 "hdgst": ${hdgst:-false}, 00:27:51.787 "ddgst": ${ddgst:-false} 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 } 00:27:51.787 EOF 00:27:51.787 )") 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:51.787 14:03:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme1", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme2", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme3", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme4", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme5", 00:27:51.787 
"trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme6", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme7", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme8", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme9", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.787 "trsvcid": "4420", 00:27:51.787 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:51.787 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:51.787 "hdgst": false, 00:27:51.787 "ddgst": 
false 00:27:51.787 }, 00:27:51.787 "method": "bdev_nvme_attach_controller" 00:27:51.787 },{ 00:27:51.787 "params": { 00:27:51.787 "name": "Nvme10", 00:27:51.787 "trtype": "tcp", 00:27:51.787 "traddr": "10.0.0.2", 00:27:51.787 "adrfam": "ipv4", 00:27:51.788 "trsvcid": "4420", 00:27:51.788 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:51.788 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:51.788 "hdgst": false, 00:27:51.788 "ddgst": false 00:27:51.788 }, 00:27:51.788 "method": "bdev_nvme_attach_controller" 00:27:51.788 }' 00:27:51.788 [2024-07-14 14:03:29.592756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:51.788 [2024-07-14 14:03:29.592828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540389 ] 00:27:51.788 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.788 [2024-07-14 14:03:29.656421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.788 [2024-07-14 14:03:29.742564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.161 Running I/O for 10 seconds... 
00:27:53.726 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:53.727 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1540389 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@946 -- # '[' -z 1540389 ']' 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1540389 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1540389 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1540389' 00:27:53.985 killing process with pid 1540389 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1540389 00:27:53.985 14:03:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1540389 00:27:54.243 Received shutdown signal, test time was about 0.985391 seconds 00:27:54.243 00:27:54.243 Latency(us) 00:27:54.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.243 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme1n1 : 0.98 260.02 16.25 0.00 0.00 233718.90 23981.32 234570.33 00:27:54.243 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme2n1 : 0.94 204.87 12.80 0.00 0.00 302608.43 22039.51 295154.73 00:27:54.243 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 
00:27:54.243 Nvme3n1 : 0.94 272.17 17.01 0.00 0.00 223180.23 17087.91 256318.58 00:27:54.243 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme4n1 : 0.93 278.94 17.43 0.00 0.00 212228.72 5898.24 239230.67 00:27:54.243 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme5n1 : 0.92 208.99 13.06 0.00 0.00 278009.68 21748.24 257872.02 00:27:54.243 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme6n1 : 0.92 207.94 13.00 0.00 0.00 273343.53 20194.80 259425.47 00:27:54.243 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme7n1 : 0.95 270.51 16.91 0.00 0.00 205641.39 18835.53 265639.25 00:27:54.243 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme8n1 : 0.91 211.95 13.25 0.00 0.00 255788.88 18058.81 256318.58 00:27:54.243 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme9n1 : 0.93 206.26 12.89 0.00 0.00 258192.12 23204.60 267192.70 00:27:54.243 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.243 Verification LBA range: start 0x0 length 0x400 00:27:54.243 Nvme10n1 : 0.90 214.24 13.39 0.00 0.00 240609.15 28738.75 248551.35 00:27:54.243 =================================================================================================================== 00:27:54.243 Total : 2335.88 145.99 0.00 0.00 244785.22 5898.24 295154.73 00:27:54.501 14:03:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:55.432 14:03:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1540210 00:27:55.432 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:55.432 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:55.433 rmmod nvme_tcp 00:27:55.433 rmmod nvme_fabrics 00:27:55.433 rmmod nvme_keyring 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1540210 ']' 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 1540210 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1540210 ']' 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1540210 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1540210 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1540210' 00:27:55.433 killing process with pid 1540210 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1540210 00:27:55.433 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1540210 00:27:56.021 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.022 14:03:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.923 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:57.923 00:27:57.923 real 0m7.430s 00:27:57.923 user 0m22.075s 00:27:57.923 sys 0m1.481s 00:27:57.923 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:57.923 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.923 ************************************ 00:27:57.923 END TEST nvmf_shutdown_tc2 00:27:57.923 ************************************ 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.182 ************************************ 00:27:58.182 START TEST nvmf_shutdown_tc3 00:27:58.182 ************************************ 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.182 14:03:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # 
net_devs=() 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.182 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.183 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.183 14:03:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.183 14:03:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 
-- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:27:58.183 00:27:58.183 --- 10.0.0.2 ping statistics --- 00:27:58.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.183 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:27:58.183 00:27:58.183 --- 10.0.0.1 ping statistics --- 00:27:58.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.183 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 
-- # '[' tcp == tcp ']' 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1541296 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1541296 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1541296 ']' 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:58.183 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.442 [2024-07-14 14:03:36.190957] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:58.442 [2024-07-14 14:03:36.191053] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.442 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.442 [2024-07-14 14:03:36.261087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.442 [2024-07-14 14:03:36.350411] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.442 [2024-07-14 14:03:36.350468] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.442 [2024-07-14 14:03:36.350496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.442 [2024-07-14 14:03:36.350507] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.442 [2024-07-14 14:03:36.350517] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:58.442 [2024-07-14 14:03:36.350568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.442 [2024-07-14 14:03:36.350794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.442 [2024-07-14 14:03:36.350851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:58.442 [2024-07-14 14:03:36.350854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.700 [2024-07-14 14:03:36.494466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:58.700 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:58.700 
14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.701 14:03:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:58.701 Malloc1 00:27:58.701 [2024-07-14 14:03:36.569608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.701 Malloc2 00:27:58.701 Malloc3 00:27:58.959 Malloc4 00:27:58.959 Malloc5 00:27:58.959 Malloc6 00:27:58.959 Malloc7 00:27:58.959 Malloc8 00:27:58.959 Malloc9 00:27:59.218 Malloc10 00:27:59.218 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.218 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:59.218 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.218 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:59.218 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1541363 00:27:59.218 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
1541363 /var/tmp/bdevperf.sock 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1541363 ']' 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 }, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 
}, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 }, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 }, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 }, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 }, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 
"params": { 00:27:59.219 "name": "Nvme$subsystem", 00:27:59.219 "trtype": "$TEST_TRANSPORT", 00:27:59.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.219 "adrfam": "ipv4", 00:27:59.219 "trsvcid": "$NVMF_PORT", 00:27:59.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.219 "hdgst": ${hdgst:-false}, 00:27:59.219 "ddgst": ${ddgst:-false} 00:27:59.219 }, 00:27:59.219 "method": "bdev_nvme_attach_controller" 00:27:59.219 } 00:27:59.219 EOF 00:27:59.219 )") 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.219 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.219 { 00:27:59.219 "params": { 00:27:59.220 "name": "Nvme$subsystem", 00:27:59.220 "trtype": "$TEST_TRANSPORT", 00:27:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "$NVMF_PORT", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.220 "hdgst": ${hdgst:-false}, 00:27:59.220 "ddgst": ${ddgst:-false} 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 } 00:27:59.220 EOF 00:27:59.220 )") 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.220 { 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme$subsystem", 00:27:59.220 "trtype": "$TEST_TRANSPORT", 00:27:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "$NVMF_PORT", 00:27:59.220 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.220 "hdgst": ${hdgst:-false}, 00:27:59.220 "ddgst": ${ddgst:-false} 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 } 00:27:59.220 EOF 00:27:59.220 )") 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:59.220 { 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme$subsystem", 00:27:59.220 "trtype": "$TEST_TRANSPORT", 00:27:59.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "$NVMF_PORT", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.220 "hdgst": ${hdgst:-false}, 00:27:59.220 "ddgst": ${ddgst:-false} 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 } 00:27:59.220 EOF 00:27:59.220 )") 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:59.220 14:03:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme1", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme2", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme3", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme4", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme5", 00:27:59.220 
"trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme6", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme7", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme8", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme9", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:59.220 "hdgst": false, 00:27:59.220 "ddgst": 
false 00:27:59.220 }, 00:27:59.220 "method": "bdev_nvme_attach_controller" 00:27:59.220 },{ 00:27:59.220 "params": { 00:27:59.220 "name": "Nvme10", 00:27:59.220 "trtype": "tcp", 00:27:59.220 "traddr": "10.0.0.2", 00:27:59.220 "adrfam": "ipv4", 00:27:59.220 "trsvcid": "4420", 00:27:59.220 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:59.220 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:59.221 "hdgst": false, 00:27:59.221 "ddgst": false 00:27:59.221 }, 00:27:59.221 "method": "bdev_nvme_attach_controller" 00:27:59.221 }' 00:27:59.221 [2024-07-14 14:03:37.063143] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:59.221 [2024-07-14 14:03:37.063260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541363 ] 00:27:59.221 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.221 [2024-07-14 14:03:37.128507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.479 [2024-07-14 14:03:37.217620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.895 Running I/O for 10 seconds... 
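The `nvmf/common.sh` trace above (lines @532-@558) shows how the bdevperf JSON config is assembled: one heredoc fragment per subsystem is appended to a `config` array, then the fragments are comma-joined via `IFS=,` and printed as a single document (the real helper validates it with `jq .` before handing it to bdevperf). A simplified, runnable sketch of that pattern, with stand-in values taken from the log:

```shell
#!/usr/bin/env bash
# Simplified sketch of the config-assembly pattern from nvmf/common.sh
# seen in the log. Values are stand-ins matching the trace; the real
# helper pipes the merged document through `jq .` and feeds bdevperf.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # One JSON fragment per subsystem, built from a heredoc so the
  # shell expands $subsystem and the connection variables in place.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "${config[*]}" joins array elements with the first character of IFS,
# which is exactly the `IFS=,` + printf step at @557-@558 in the log.
IFS=,
merged=$(printf '%s\n' "${config[*]}")
printf '%s\n' "$merged"
```

Setting `IFS=,` only matters for the `"${config[*]}"` expansion; the fragments themselves are untouched, which is why the merged output in the log reads `},{` between consecutive `bdev_nvme_attach_controller` entries.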
00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.154 14:03:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:01.154 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.412 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 
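The `waitforio` loop traced above (target/shutdown.sh@57-@69) polls `bdev_get_iostat` for `Nvme1n1` until it has served at least 100 reads or 10 attempts elapse, sleeping 0.25 s between polls (first poll: 67 ops, second: 131, then `break`/`return 0`). A runnable sketch of that loop; the `rpc_cmd` below is a hypothetical stub that fakes a growing read counter, whereas the real `rpc_cmd` talks to bdevperf over `/var/tmp/bdevperf.sock` and extracts the count with `jq -r '.bdevs[0].num_read_ops'` (plain `grep` is used here to avoid a jq dependency):

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop from target/shutdown.sh seen in
# the log. rpc_cmd is a stub (assumption, not SPDK's client): it adds
# 67 reads per poll, mirroring the first iostat sample in the trace.
counter_file=$(mktemp)
echo 0 > "$counter_file"

rpc_cmd() {
  # Stub: persist the counter in a file so it survives the command
  # substitution subshell in the loop below.
  local reads=$(( $(cat "$counter_file") + 67 ))
  echo "$reads" > "$counter_file"
  printf '{"bdevs":[{"num_read_ops":%d}]}\n' "$reads"
}

waitforio() {
  local ret=1 i count
  for ((i = 10; i != 0; i--)); do
    # Real script: ... | jq -r '.bdevs[0].num_read_ops'
    count=$(rpc_cmd bdev_get_iostat -b Nvme1n1 | grep -o '[0-9][0-9]*')
    if [ "$count" -ge 100 ]; then
      ret=0   # enough I/O observed; safe to start shutting down
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "enough I/O observed"
```

The countdown (`i = 10; i != 0; i--`) bounds the wait at roughly 2.5 s, so a stalled target fails the check instead of hanging the test.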
00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1541296 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1541296 ']' 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1541296 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1541296 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1541296' 00:28:01.685 killing process with pid 1541296 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1541296 00:28:01.685 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1541296 00:28:01.685 [2024-07-14 14:03:39.427710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.685 [2024-07-14 14:03:39.427798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.685 [2024-07-14 14:03:39.427826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.685 [2024-07-14 14:03:39.427849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 
is same with the state(5) to be set 00:28:01.685 [2024-07-14 14:03:39.427863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.685 [2024-07-14 14:03:39.427897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.427917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.427942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.427957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.427969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.427990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be 
set 00:28:01.686 [2024-07-14 14:03:39.428091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428144] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 
14:03:39.428295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428510] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428558] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428691] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.428786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c560 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.430404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2283700 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.431306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ca00 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.431344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ca00 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.431360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ca00 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.431375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ca00 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.431389] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ca00 is same with the state(5) to be set 00:28:01.686 [2024-07-14 14:03:39.431401]
(identical message repeated for tqpair=0x243ca00 through [2024-07-14 14:03:39.432155])
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243cea0 is same with the state(5) to be set 00:28:01.687 [2024-07-14 14:03:39.433976]
(identical message repeated for tqpair=0x243cea0 through [2024-07-14 14:03:39.435469])
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243d360 is same with the state(5) to be set 00:28:01.688 [2024-07-14 14:03:39.436939]
(identical message repeated for tqpair=0x243d360 through [2024-07-14 14:03:39.437783])
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243d800 is same with the state(5) to be set 00:28:01.688 [2024-07-14 14:03:39.438570]
(identical message repeated for tqpair=0x243d800 through [2024-07-14 14:03:39.439420])
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440453]
(identical message repeated for tqpair=0x243dcc0 through [2024-07-14 14:03:39.440513])
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440655] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440802] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440981] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.440994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.441006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.441018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.689 [2024-07-14 14:03:39.441030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441130] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.441266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243dcc0 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442557] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442837] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442909] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.442990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443148] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443444] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443713] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.690 [2024-07-14 14:03:39.443932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.691 [2024-07-14 14:03:39.443955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.691 [2024-07-14 14:03:39.443978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.691 [2024-07-14 14:03:39.444000] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.691 [2024-07-14 14:03:39.444023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243e160 is same with the state(5) to be set 00:28:01.691 [2024-07-14 14:03:39.448124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.691 [2024-07-14 14:03:39.448624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.691 [2024-07-14 14:03:39.448639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.448974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.448990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.691 [2024-07-14 14:03:39.449300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.691 [2024-07-14 14:03:39.449313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.449984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.692 [2024-07-14 14:03:39.449998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450510] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22f78b0 was disconnected and freed. reset controller.
00:28:01.692 [2024-07-14 14:03:39.450622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cbf00 is same with the state(5) to be set
00:28:01.692 [2024-07-14 14:03:39.450787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.450909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252810 is same with the state(5) to be set
00:28:01.692 [2024-07-14 14:03:39.450968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.450987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2225190 is same with the state(5) to be set
00:28:01.692 [2024-07-14 14:03:39.451129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.692 [2024-07-14 14:03:39.451239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.692 [2024-07-14 14:03:39.451251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2227300 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.451294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9ec0 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.451450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255f90 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.451610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.451766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ff90 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.451934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.451984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.451998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.452010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.452023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.452036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.452048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224a6b0 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.452094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.452114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.452128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.452141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.452154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.452178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.452191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:01.693 [2024-07-14 14:03:39.452204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.452216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f610 is same with the state(5) to be set
00:28:01.693 [2024-07-14 14:03:39.453193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.693 [2024-07-14 14:03:39.453671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.693 [2024-07-14 14:03:39.453686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14 14:03:39.453874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.694 [2024-07-14 14:03:39.453899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.694 [2024-07-14
14:03:39.453914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.453934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.453947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.453962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.453976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.453991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.454004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.454019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.454032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.454047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.454060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.454075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.454092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.454107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.454121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.694 [2024-07-14 14:03:39.454136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.694 [2024-07-14 14:03:39.454150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.454984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.454999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:01.695 [2024-07-14 14:03:39.455208] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2222010 was disconnected and freed. reset controller. 00:28:01.695 [2024-07-14 14:03:39.455264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.695 [2024-07-14 14:03:39.455545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:01.695 [2024-07-14 14:03:39.455558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455720] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455874] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.455983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.455996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 [2024-07-14 14:03:39.456190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.696 [2024-07-14 14:03:39.456203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.696 
[2024-07-14 14:03:39.456217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.696 [2024-07-14 14:03:39.456765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.696 [2024-07-14 14:03:39.456784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.456798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.456813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.456826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.456840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.456853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.456868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.456888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.456903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.456954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.456971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.456985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.457014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.457042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.457097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.457125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.697 [2024-07-14 14:03:39.457153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.697 [2024-07-14 14:03:39.457167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22233e0 is same with the state(5) to be set
00:28:01.697 [2024-07-14 14:03:39.465890] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22233e0 was disconnected and freed. reset controller.
00:28:01.697 [2024-07-14 14:03:39.467151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cbf00 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252810 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2225190 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227300 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9ec0 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255f90 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ff90 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224a6b0 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.467420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f610 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.470378] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:01.697 [2024-07-14 14:03:39.471193] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:01.697 [2024-07-14 14:03:39.471225] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:01.697 [2024-07-14 14:03:39.471376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.697 [2024-07-14 14:03:39.471408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227ff90 with addr=10.0.0.2, port=4420
00:28:01.697 [2024-07-14 14:03:39.471426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ff90 is same with the state(5) to be set
00:28:01.697 [2024-07-14 14:03:39.472475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.697 [2024-07-14 14:03:39.472505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cbf00 with addr=10.0.0.2, port=4420
00:28:01.697 [2024-07-14 14:03:39.472522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cbf00 is same with the state(5) to be set
00:28:01.697 [2024-07-14 14:03:39.472620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.697 [2024-07-14 14:03:39.472643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.2, port=4420
00:28:01.697 [2024-07-14 14:03:39.472658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(5) to be set
00:28:01.697 [2024-07-14 14:03:39.472679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ff90 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.472738] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.472793] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.472858] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.472945] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.473012] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.473079] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.473155] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:01.697 [2024-07-14 14:03:39.473227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cbf00 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.473252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor
00:28:01.697 [2024-07-14 14:03:39.473268] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:28:01.697 [2024-07-14 14:03:39.473281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:28:01.697 [2024-07-14 14:03:39.473296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:28:01.697 [2024-07-14 14:03:39.473341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.697 [2024-07-14 14:03:39.473768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.697 [2024-07-14 14:03:39.473783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.473810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.698 [2024-07-14 14:03:39.473837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.473866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.473913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.473952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.473979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.473992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 
14:03:39.474507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.698 [2024-07-14 14:03:39.474833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.698 [2024-07-14 14:03:39.474846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.474861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.474874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.474900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.474914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.474933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.474946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.474961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.474974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.474989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 
[2024-07-14 14:03:39.475002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.475016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.475029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.475044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.475057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.475072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.475086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.475101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.475118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.475134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.475147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.475161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.699 [2024-07-14 14:03:39.475174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.699 [2024-07-14 14:03:39.475189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.699 [2024-07-14 14:03:39.475201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.699 [2024-07-14 14:03:39.475215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22feef0 is same with the state(5) to be set
00:28:01.699 [2024-07-14 14:03:39.475313] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22feef0 was disconnected and freed. reset controller.
00:28:01.699 [2024-07-14 14:03:39.475428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:01.699 [2024-07-14 14:03:39.475454] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:28:01.699 [2024-07-14 14:03:39.475470] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:28:01.699 [2024-07-14 14:03:39.475484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:28:01.699 [2024-07-14 14:03:39.475502] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:28:01.699 [2024-07-14 14:03:39.475515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:28:01.699 [2024-07-14 14:03:39.475527] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:28:01.699 [2024-07-14 14:03:39.476733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:01.699 [2024-07-14 14:03:39.476767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:01.699 [2024-07-14 14:03:39.476780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:01.699 [2024-07-14 14:03:39.476977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.699 [2024-07-14 14:03:39.477005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2227300 with addr=10.0.0.2, port=4420
00:28:01.699 [2024-07-14 14:03:39.477021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2227300 is same with the state(5) to be set
00:28:01.699 [2024-07-14 14:03:39.477343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227300 (9): Bad file descriptor
00:28:01.699 [2024-07-14 14:03:39.477479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:01.699 [2024-07-14 14:03:39.477502] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:01.699 [2024-07-14 14:03:39.477516] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:01.699 [2024-07-14 14:03:39.477591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.477984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.477999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.478034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.478063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.478091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.699 [2024-07-14 14:03:39.478119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.478147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.478179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.699 [2024-07-14 14:03:39.478192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.699 [2024-07-14 14:03:39.478207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 
14:03:39.478774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.478979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.478994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 
[2024-07-14 14:03:39.479265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.700 [2024-07-14 14:03:39.479404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.700 [2024-07-14 14:03:39.479418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.700 [2024-07-14 14:03:39.479431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.700 [2024-07-14 14:03:39.479446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.701 [2024-07-14 14:03:39.479462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.701 [2024-07-14 14:03:39.479477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23001d0 is same with the state(5) to be set
00:28:01.701 [2024-07-14 14:03:39.480721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.701 [2024-07-14 14:03:39.480744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.701 [2024-07-14 14:03:39.480762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.701 [2024-07-14 14:03:39.480777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.701 [2024-07-14 14:03:39.480792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:01.701 [2024-07-14 14:03:39.480806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:01.701 [2024-07-14 14:03:39.480821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.480834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.480849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.480863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.480885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.480901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.480922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.480936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.480951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.480964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.480979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.480991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.701 [2024-07-14 14:03:39.481006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481164] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 
14:03:39.481656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481819] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.701 [2024-07-14 14:03:39.481925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.701 [2024-07-14 14:03:39.481941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.481954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.481969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.481982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.481997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 
[2024-07-14 14:03:39.482149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.482577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.482591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237dcd0 is same with the state(5) to be set 00:28:01.702 [2024-07-14 14:03:39.483828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.483850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.702 [2024-07-14 14:03:39.483870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.483899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.483915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.483931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.483946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.483959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.483974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.483988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.702 [2024-07-14 14:03:39.484376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.702 [2024-07-14 14:03:39.484404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.702 [2024-07-14 14:03:39.484417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.484979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.484994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 
14:03:39.485036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485196] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.703 [2024-07-14 14:03:39.485481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.703 [2024-07-14 14:03:39.485494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 
[2024-07-14 14:03:39.485521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.485549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.485577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.485615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.485643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.485671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.485699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.485713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237f1f0 is same with the state(5) to be set 00:28:01.704 [2024-07-14 14:03:39.486947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.486970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.486990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.704 [2024-07-14 14:03:39.487250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 
14:03:39.487887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.704 [2024-07-14 14:03:39.487938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.704 [2024-07-14 14:03:39.487952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.487967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.487980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.487995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488051] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 
[2024-07-14 14:03:39.488383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.488540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.488553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.497334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.497349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23806f0 is same with the state(5) to be set 00:28:01.705 [2024-07-14 14:03:39.498697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.705 [2024-07-14 14:03:39.498774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.498974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.498988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.499001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.499016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.499030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.499045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.499058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.499073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.705 [2024-07-14 14:03:39.499086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.705 [2024-07-14 14:03:39.499101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.706 [2024-07-14 14:03:39.499272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 
14:03:39.499924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.499968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.499981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.706 [2024-07-14 14:03:39.500312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.706 [2024-07-14 14:03:39.500326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 
[2024-07-14 14:03:39.500415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.500556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.500570] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221f630 is same with the state(5) to be set 00:28:01.707 [2024-07-14 14:03:39.501812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.501835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.501855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.501870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.501893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.501908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.501923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.501937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.501952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.501970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.501985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.501999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:01.707 [2024-07-14 14:03:39.502155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.707 [2024-07-14 14:03:39.502632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:01.707 [2024-07-14 14:03:39.502648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 
14:03:39.502805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.502981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.502996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 
[2024-07-14 14:03:39.503293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.708 [2024-07-14 14:03:39.503659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:01.708 [2024-07-14 14:03:39.503673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2220b10 is same with the state(5) to be set 00:28:01.708 [2024-07-14 14:03:39.505275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.708 [2024-07-14 14:03:39.505305] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:01.708 [2024-07-14 14:03:39.505327] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:01.708 [2024-07-14 14:03:39.505344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:01.708 [2024-07-14 14:03:39.505457] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:01.708 [2024-07-14 14:03:39.505485] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:01.708 [2024-07-14 14:03:39.505510] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:01.708 [2024-07-14 14:03:39.505970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:01.708 [2024-07-14 14:03:39.505999] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:01.708 task offset: 17792 on job bdev=Nvme10n1 fails 00:28:01.708 00:28:01.708 Latency(us) 00:28:01.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.708 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.708 Job: Nvme1n1 ended in about 0.80 seconds with error 00:28:01.708 Verification LBA range: start 0x0 length 0x400 00:28:01.708 Nvme1n1 : 0.80 159.42 9.96 79.71 0.00 264310.83 8155.59 253211.69 00:28:01.709 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme2n1 ended in about 0.81 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme2n1 : 0.81 158.65 9.92 79.32 0.00 259559.22 18252.99 239230.67 00:28:01.709 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme3n1 ended in about 0.81 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme3n1 : 0.81 158.04 9.88 79.02 0.00 254408.63 28738.75 260978.92 00:28:01.709 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme4n1 ended in about 0.81 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme4n1 : 0.81 157.43 9.84 78.72 0.00 249324.53 20583.16 248551.35 00:28:01.709 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme5n1 ended in about 0.82 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme5n1 : 0.82 155.21 9.70 77.61 0.00 247105.61 18835.53 254765.13 00:28:01.709 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:01.709 Job: Nvme6n1 ended in about 0.83 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme6n1 : 0.83 154.62 9.66 77.31 0.00 242068.29 18058.81 256318.58 00:28:01.709 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme7n1 ended in about 0.83 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme7n1 : 0.83 154.04 9.63 77.02 0.00 237046.14 21165.70 233016.89 00:28:01.709 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme8n1 ended in about 0.80 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme8n1 : 0.80 160.91 10.06 80.45 0.00 219277.84 15243.19 233016.89 00:28:01.709 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme9n1 ended in about 0.80 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme9n1 : 0.80 160.69 10.04 80.35 0.00 213930.79 20777.34 279620.27 00:28:01.709 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.709 Job: Nvme10n1 ended in about 0.79 seconds with error 00:28:01.709 Verification LBA range: start 0x0 length 0x400 00:28:01.709 Nvme10n1 : 0.79 161.35 10.08 80.67 0.00 207039.65 22039.51 251658.24 00:28:01.709 =================================================================================================================== 00:28:01.709 Total : 1580.36 98.77 790.18 0.00 239407.15 8155.59 279620.27 00:28:01.709 [2024-07-14 14:03:39.531434] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:01.709 [2024-07-14 14:03:39.531520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:01.709 [2024-07-14 14:03:39.531827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 
14:03:39.531886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2225190 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.531909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2225190 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.531996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.532021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252810 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.532036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252810 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.532133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.532156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255f90 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.532171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255f90 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.533779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:01.709 [2024-07-14 14:03:39.533808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:01.709 [2024-07-14 14:03:39.533825] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:01.709 [2024-07-14 14:03:39.534028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.534058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224a6b0 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.534074] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224a6b0 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.534165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.534193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9ec0 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.534208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9ec0 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.534300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.534323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1f610 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.534338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f610 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.534362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2225190 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.534384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252810 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.534401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255f90 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.534452] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:01.709 [2024-07-14 14:03:39.534478] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:01.709 [2024-07-14 14:03:39.534498] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:01.709 [2024-07-14 14:03:39.534518] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:01.709 [2024-07-14 14:03:39.534592] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:01.709 [2024-07-14 14:03:39.534739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.534765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227ff90 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.534785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ff90 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.534869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.534902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.534918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.535001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.535025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cbf00 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.535040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cbf00 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.535057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224a6b0 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.535076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9ec0 (9): Bad file descriptor 00:28:01.709 [2024-07-14 
14:03:39.535092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1f610 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.535107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:01.709 [2024-07-14 14:03:39.535120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:01.709 [2024-07-14 14:03:39.535134] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:01.709 [2024-07-14 14:03:39.535153] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:01.709 [2024-07-14 14:03:39.535166] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:01.709 [2024-07-14 14:03:39.535178] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:01.709 [2024-07-14 14:03:39.535194] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:01.709 [2024-07-14 14:03:39.535209] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:01.709 [2024-07-14 14:03:39.535221] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:01.709 [2024-07-14 14:03:39.535317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.709 [2024-07-14 14:03:39.535338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.709 [2024-07-14 14:03:39.535349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:01.709 [2024-07-14 14:03:39.535439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.709 [2024-07-14 14:03:39.535464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2227300 with addr=10.0.0.2, port=4420 00:28:01.709 [2024-07-14 14:03:39.535478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2227300 is same with the state(5) to be set 00:28:01.709 [2024-07-14 14:03:39.535496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227ff90 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.535514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.535531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cbf00 (9): Bad file descriptor 00:28:01.709 [2024-07-14 14:03:39.535545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:01.709 [2024-07-14 14:03:39.535557] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:01.709 [2024-07-14 14:03:39.535575] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:01.709 [2024-07-14 14:03:39.535592] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:01.709 [2024-07-14 14:03:39.535606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:01.709 [2024-07-14 14:03:39.535618] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:28:01.709 [2024-07-14 14:03:39.535633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:01.709 [2024-07-14 14:03:39.535645] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:01.709 [2024-07-14 14:03:39.535657] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:01.709 [2024-07-14 14:03:39.535694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.709 [2024-07-14 14:03:39.535712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.709 [2024-07-14 14:03:39.535723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.709 [2024-07-14 14:03:39.535740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2227300 (9): Bad file descriptor 00:28:01.710 [2024-07-14 14:03:39.535756] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:01.710 [2024-07-14 14:03:39.535768] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:01.710 [2024-07-14 14:03:39.535780] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:01.710 [2024-07-14 14:03:39.535796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:01.710 [2024-07-14 14:03:39.535809] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:01.710 [2024-07-14 14:03:39.535821] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:28:01.710 [2024-07-14 14:03:39.535836] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:01.710 [2024-07-14 14:03:39.535849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:01.710 [2024-07-14 14:03:39.535860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:01.710 [2024-07-14 14:03:39.535907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.710 [2024-07-14 14:03:39.535926] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.710 [2024-07-14 14:03:39.535937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:01.710 [2024-07-14 14:03:39.535957] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:01.710 [2024-07-14 14:03:39.535969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:01.710 [2024-07-14 14:03:39.535981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:01.710 [2024-07-14 14:03:39.536015] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.274 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:02.274 14:03:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:03.206 14:03:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1541363 00:28:03.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1541363) - No such process 00:28:03.206 14:03:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:03.206 14:03:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:03.206 14:03:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.206 rmmod nvme_tcp 00:28:03.206 rmmod nvme_fabrics 00:28:03.206 rmmod nvme_keyring 00:28:03.206 14:03:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.206 14:03:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.731 14:03:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:05.731 00:28:05.731 real 0m7.146s 00:28:05.731 user 0m16.664s 00:28:05.731 sys 0m1.375s 00:28:05.731 14:03:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:05.731 14:03:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 ************************************ 00:28:05.731 END TEST nvmf_shutdown_tc3 00:28:05.731 ************************************ 00:28:05.731 14:03:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - 
SIGINT SIGTERM EXIT 00:28:05.731 00:28:05.731 real 0m26.711s 00:28:05.731 user 1m13.353s 00:28:05.731 sys 0m6.185s 00:28:05.731 14:03:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:05.731 14:03:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 ************************************ 00:28:05.731 END TEST nvmf_shutdown 00:28:05.731 ************************************ 00:28:05.731 14:03:43 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 14:03:43 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 14:03:43 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:05.731 14:03:43 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:05.731 14:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:05.731 ************************************ 00:28:05.731 START TEST nvmf_multicontroller 00:28:05.731 ************************************ 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:05.731 * Looking for test storage... 
00:28:05.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.731 
14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:05.731 14:03:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:05.731 14:03:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.630 14:03:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:07.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:07.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.630 14:03:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.630 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:07.631 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:07.631 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:07.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:28:07.631 00:28:07.631 --- 10.0.0.2 ping statistics --- 00:28:07.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.631 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:07.631 00:28:07.631 --- 10.0.0.1 ping statistics --- 00:28:07.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.631 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1543868 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1543868 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1543868 ']' 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:07.631 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.631 [2024-07-14 14:03:45.355069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
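The `ip netns` / `ip addr` / `iptables` commands in the log above show `nvmftestinit` splitting a two-port NIC so one host can act as both initiator (10.0.0.1 on `cvl_0_1`) and target (10.0.0.2 on `cvl_0_0`, inside namespace `cvl_0_0_ns_spdk`). A minimal sketch of that topology, expressed as a printed plan (interface and namespace names are taken from the log; executing the commands for real requires root and the physical NIC):

```shell
#!/usr/bin/env bash
# Plan for the split-namespace topology used above. This sketch only prints
# the command sequence; run each line as root to apply it for real.
NS=cvl_0_0_ns_spdk
setup_cmds=(
  "ip netns add $NS"
  "ip link set cvl_0_0 netns $NS"                      # target port moves into the netns
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                # initiator IP stays on the host side
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"
  "ip link set cvl_0_1 up"
  "ip netns exec $NS ip link set cvl_0_0 up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${setup_cmds[@]}"
```

The two `ping -c 1` exchanges in the log (host to 10.0.0.2, and from inside the namespace back to 10.0.0.1) verify this topology before any NVMe/TCP traffic is attempted.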
00:28:07.631 [2024-07-14 14:03:45.355140] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.631 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.631 [2024-07-14 14:03:45.420049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:07.631 [2024-07-14 14:03:45.504350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.631 [2024-07-14 14:03:45.504421] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.631 [2024-07-14 14:03:45.504434] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.631 [2024-07-14 14:03:45.504453] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.631 [2024-07-14 14:03:45.504477] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
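`nvmf_tgt` was launched above with `-m 0xE`, and the reactor messages that follow report cores 1, 2 and 3 starting. A small helper (not part of the harness, just an illustration of how SPDK interprets the mask) to decode such a core mask:

```shell
# Decode an SPDK core mask like the "-m 0xE" above into a core list.
# 0xE is binary 1110, so bits 1-3 are set and three reactors start.
cores_from_mask() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -ne 0 ]; then
      out="$out$core "        # bit is set: this core hosts a reactor
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${out% }"
}

cores_from_mask 0xE    # prints: 1 2 3
```

This matches the log: the master core runs the app framework while the remaining masked cores run reactors, which is why `spdk_app_start` reports "Total cores available: 3".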
00:28:07.631 [2024-07-14 14:03:45.504575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.631 [2024-07-14 14:03:45.504642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.631 [2024-07-14 14:03:45.504643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 [2024-07-14 14:03:45.652063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 Malloc0 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 [2024-07-14 14:03:45.722686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 [2024-07-14 14:03:45.730545] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 Malloc1 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1543890 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1543890 /var/tmp/bdevperf.sock 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1543890 ']' 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:07.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
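`bdevperf` is started above with `-z` (wait for RPC before running I/O) on its own socket, `/var/tmp/bdevperf.sock`. The steps that follow attach a controller named `NVMe0` and then deliberately repeat the attach, expecting JSON-RPC error `-114`. A sketch of checking that failure mode against a captured response (the response text is copied from the log below; the commented `rpc.py` line shows how the real call would be issued against a live bdevperf):

```shell
#!/usr/bin/env bash
# Real invocation (needs a running bdevperf; paths/arguments from the log):
#   scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
#       -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Error payload as seen in the log when the name "NVMe0" is reused:
response='{"code": -114, "message": "A controller named NVMe0 already exists with the specified network path\n"}'

# Extract the numeric error code and confirm the duplicate was rejected.
code=$(printf '%s' "$response" | sed -n 's/.*"code": \(-\{0,1\}[0-9][0-9]*\).*/\1/p')
if [ "$code" = "-114" ]; then
  echo "duplicate controller correctly rejected"
fi
```

This is the pattern the `NOT rpc_cmd ...` assertions below rely on: the RPC must fail, and the harness treats a non-zero exit from `rpc_cmd` as the expected outcome.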
00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:07.890 14:03:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.148 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:08.148 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:08.148 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:08.148 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.148 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.406 NVMe0n1 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.406 1 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.406 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.406 request: 00:28:08.406 { 00:28:08.406 "name": "NVMe0", 00:28:08.407 "trtype": "tcp", 00:28:08.407 "traddr": "10.0.0.2", 00:28:08.407 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:08.407 "hostaddr": "10.0.0.2", 00:28:08.407 "hostsvcid": "60000", 00:28:08.407 "adrfam": "ipv4", 00:28:08.407 "trsvcid": "4420", 00:28:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.407 "method": "bdev_nvme_attach_controller", 00:28:08.407 "req_id": 1 00:28:08.407 } 00:28:08.407 Got JSON-RPC error response 00:28:08.407 response: 00:28:08.407 { 00:28:08.407 "code": -114, 00:28:08.407 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:08.407 } 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:08.407 14:03:46 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.407 request: 00:28:08.407 { 00:28:08.407 "name": "NVMe0", 00:28:08.407 "trtype": "tcp", 
00:28:08.407 "traddr": "10.0.0.2", 00:28:08.407 "hostaddr": "10.0.0.2", 00:28:08.407 "hostsvcid": "60000", 00:28:08.407 "adrfam": "ipv4", 00:28:08.407 "trsvcid": "4420", 00:28:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:08.407 "method": "bdev_nvme_attach_controller", 00:28:08.407 "req_id": 1 00:28:08.407 } 00:28:08.407 Got JSON-RPC error response 00:28:08.407 response: 00:28:08.407 { 00:28:08.407 "code": -114, 00:28:08.407 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:08.407 } 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.407 request: 00:28:08.407 { 00:28:08.407 "name": "NVMe0", 00:28:08.407 "trtype": "tcp", 00:28:08.407 "traddr": "10.0.0.2", 00:28:08.407 "hostaddr": "10.0.0.2", 00:28:08.407 "hostsvcid": "60000", 00:28:08.407 "adrfam": "ipv4", 00:28:08.407 "trsvcid": "4420", 00:28:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.407 "multipath": "disable", 00:28:08.407 "method": "bdev_nvme_attach_controller", 00:28:08.407 "req_id": 1 00:28:08.407 } 00:28:08.407 Got JSON-RPC error response 00:28:08.407 response: 00:28:08.407 { 00:28:08.407 "code": -114, 00:28:08.407 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:08.407 } 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 
10.0.0.2 -c 60000 -x failover 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.407 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.407 request: 00:28:08.407 { 00:28:08.407 "name": "NVMe0", 00:28:08.407 "trtype": "tcp", 00:28:08.407 "traddr": "10.0.0.2", 00:28:08.407 "hostaddr": "10.0.0.2", 00:28:08.407 "hostsvcid": "60000", 00:28:08.407 "adrfam": "ipv4", 00:28:08.407 "trsvcid": "4420", 00:28:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.407 "multipath": "failover", 00:28:08.407 "method": "bdev_nvme_attach_controller", 00:28:08.407 "req_id": 1 00:28:08.407 } 00:28:08.408 Got JSON-RPC error response 00:28:08.408 response: 00:28:08.408 { 00:28:08.408 "code": -114, 00:28:08.408 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:08.408 } 
00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.408 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.665 00:28:08.665 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.665 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:08.665 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.665 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.665 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.665 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.666 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:08.666 14:03:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:10.039 0 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1543890 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1543890 ']' 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1543890 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1543890 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1543890' 00:28:10.039 killing process with pid 1543890 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1543890 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1543890 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.039 14:03:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:10.039 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:10.039 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:10.039 [2024-07-14 14:03:45.836216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:10.039 [2024-07-14 14:03:45.836296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543890 ] 00:28:10.039 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.039 [2024-07-14 14:03:45.896029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.039 [2024-07-14 14:03:45.983066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.039 [2024-07-14 14:03:46.551256] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name da21edb8-6869-4674-8404-a60410f6ece1 already exists 00:28:10.039 [2024-07-14 14:03:46.551295] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:da21edb8-6869-4674-8404-a60410f6ece1 alias for bdev NVMe1n1 00:28:10.039 [2024-07-14 14:03:46.551329] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:10.039 Running I/O for 1 seconds... 
00:28:10.039 00:28:10.039 Latency(us) 00:28:10.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.039 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:10.039 NVMe0n1 : 1.01 18951.52 74.03 0.00 0.00 6744.33 3373.89 12039.21 00:28:10.040 =================================================================================================================== 00:28:10.040 Total : 18951.52 74.03 0.00 0.00 6744.33 3373.89 12039.21 00:28:10.040 Received shutdown signal, test time was about 1.000000 seconds 00:28:10.040 00:28:10.040 Latency(us) 00:28:10.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.040 =================================================================================================================== 00:28:10.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.040 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.040 14:03:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.040 rmmod nvme_tcp 00:28:10.040 rmmod nvme_fabrics 00:28:10.298 rmmod nvme_keyring 
00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1543868 ']' 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1543868 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1543868 ']' 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1543868 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1543868 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1543868' 00:28:10.298 killing process with pid 1543868 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1543868 00:28:10.298 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1543868 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.556 14:03:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.460 14:03:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.460 00:28:12.460 real 0m7.218s 00:28:12.460 user 0m11.369s 00:28:12.460 sys 0m2.212s 00:28:12.460 14:03:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:12.460 14:03:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:12.460 ************************************ 00:28:12.460 END TEST nvmf_multicontroller 00:28:12.460 ************************************ 00:28:12.460 14:03:50 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:12.460 14:03:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:12.460 14:03:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.460 14:03:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.719 ************************************ 00:28:12.719 START TEST nvmf_aer 00:28:12.719 ************************************ 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:12.719 * Looking for test storage... 
00:28:12.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.719 14:03:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.621 14:03:52 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:14.621 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:14.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.621 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:14.621 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:14.622 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:28:14.622 00:28:14.622 --- 10.0.0.2 ping statistics --- 00:28:14.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.622 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:14.622 00:28:14.622 --- 10.0.0.1 ping statistics --- 00:28:14.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.622 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1546098 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1546098 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1546098 ']' 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:14.622 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:14.880 [2024-07-14 14:03:52.616378] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:14.880 [2024-07-14 14:03:52.616450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.881 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.881 [2024-07-14 14:03:52.680002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.881 [2024-07-14 14:03:52.769089] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:14.881 [2024-07-14 14:03:52.769140] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.881 [2024-07-14 14:03:52.769179] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.881 [2024-07-14 14:03:52.769190] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.881 [2024-07-14 14:03:52.769200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.881 [2024-07-14 14:03:52.769270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.881 [2024-07-14 14:03:52.769330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.881 [2024-07-14 14:03:52.769637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.881 [2024-07-14 14:03:52.769641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.139 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:15.139 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:15.139 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.139 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 [2024-07-14 14:03:52.916586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.140 14:03:52 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 Malloc0 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 [2024-07-14 14:03:52.970171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.140 [ 00:28:15.140 { 00:28:15.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.140 "subtype": "Discovery", 00:28:15.140 "listen_addresses": [], 00:28:15.140 "allow_any_host": true, 00:28:15.140 "hosts": [] 00:28:15.140 }, 00:28:15.140 { 00:28:15.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.140 "subtype": "NVMe", 00:28:15.140 "listen_addresses": [ 00:28:15.140 { 00:28:15.140 "trtype": "TCP", 00:28:15.140 "adrfam": "IPv4", 00:28:15.140 "traddr": "10.0.0.2", 00:28:15.140 "trsvcid": "4420" 00:28:15.140 } 00:28:15.140 ], 00:28:15.140 "allow_any_host": true, 00:28:15.140 "hosts": [], 00:28:15.140 "serial_number": "SPDK00000000000001", 00:28:15.140 "model_number": "SPDK bdev Controller", 00:28:15.140 "max_namespaces": 2, 00:28:15.140 "min_cntlid": 1, 00:28:15.140 "max_cntlid": 65519, 00:28:15.140 "namespaces": [ 00:28:15.140 { 00:28:15.140 "nsid": 1, 00:28:15.140 "bdev_name": "Malloc0", 00:28:15.140 "name": "Malloc0", 00:28:15.140 "nguid": "A02BA0E1C63D47BFBC24CEB0C46122D0", 00:28:15.140 "uuid": "a02ba0e1-c63d-47bf-bc24-ceb0c46122d0" 00:28:15.140 } 00:28:15.140 ] 00:28:15.140 } 00:28:15.140 ] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1546126 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:15.140 14:03:52 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:15.140 14:03:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:15.140 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.140 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.140 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:15.140 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:15.140 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:15.412 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:15.412 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:15.412 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:15.412 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.413 Malloc1 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.413 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.413 Asynchronous Event Request test 00:28:15.413 Attaching to 10.0.0.2 00:28:15.413 Attached to 10.0.0.2 00:28:15.413 Registering asynchronous event callbacks... 00:28:15.413 Starting namespace attribute notice tests for all controllers... 00:28:15.413 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:15.413 aer_cb - Changed Namespace 00:28:15.413 Cleaning up... 
00:28:15.413 [ 00:28:15.413 { 00:28:15.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:15.414 "subtype": "Discovery", 00:28:15.414 "listen_addresses": [], 00:28:15.414 "allow_any_host": true, 00:28:15.414 "hosts": [] 00:28:15.414 }, 00:28:15.414 { 00:28:15.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.414 "subtype": "NVMe", 00:28:15.414 "listen_addresses": [ 00:28:15.414 { 00:28:15.414 "trtype": "TCP", 00:28:15.414 "adrfam": "IPv4", 00:28:15.414 "traddr": "10.0.0.2", 00:28:15.414 "trsvcid": "4420" 00:28:15.414 } 00:28:15.414 ], 00:28:15.414 "allow_any_host": true, 00:28:15.414 "hosts": [], 00:28:15.414 "serial_number": "SPDK00000000000001", 00:28:15.414 "model_number": "SPDK bdev Controller", 00:28:15.414 "max_namespaces": 2, 00:28:15.414 "min_cntlid": 1, 00:28:15.414 "max_cntlid": 65519, 00:28:15.414 "namespaces": [ 00:28:15.414 { 00:28:15.414 "nsid": 1, 00:28:15.414 "bdev_name": "Malloc0", 00:28:15.414 "name": "Malloc0", 00:28:15.414 "nguid": "A02BA0E1C63D47BFBC24CEB0C46122D0", 00:28:15.414 "uuid": "a02ba0e1-c63d-47bf-bc24-ceb0c46122d0" 00:28:15.414 }, 00:28:15.414 { 00:28:15.414 "nsid": 2, 00:28:15.414 "bdev_name": "Malloc1", 00:28:15.414 "name": "Malloc1", 00:28:15.414 "nguid": "3BE3CCC2D82E43E192E1DF240832358C", 00:28:15.414 "uuid": "3be3ccc2-d82e-43e1-92e1-df240832358c" 00:28:15.414 } 00:28:15.414 ] 00:28:15.414 } 00:28:15.414 ] 00:28:15.414 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.414 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1546126 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete 
Malloc1 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.415 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:15.415 rmmod nvme_tcp 00:28:15.415 rmmod nvme_fabrics 00:28:15.678 rmmod nvme_keyring 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1546098 ']' 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1546098 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1546098 ']' 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- 
# kill -0 1546098 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1546098 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1546098' 00:28:15.678 killing process with pid 1546098 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1546098 00:28:15.678 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1546098 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.936 14:03:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.840 14:03:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.840 00:28:17.840 real 0m5.275s 00:28:17.840 user 0m4.213s 00:28:17.840 sys 0m1.840s 00:28:17.840 14:03:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.840 14:03:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:28:17.840 ************************************ 00:28:17.840 END TEST nvmf_aer 00:28:17.840 ************************************ 00:28:17.840 14:03:55 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:17.840 14:03:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:17.840 14:03:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:17.840 14:03:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.840 ************************************ 00:28:17.840 START TEST nvmf_async_init 00:28:17.840 ************************************ 00:28:17.840 14:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:18.119 * Looking for test storage... 00:28:18.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=d7842b38ae9947a287802ada9fcf6b97 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:18.119 14:03:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.036 
14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.036 14:03:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:20.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:20.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.036 14:03:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:20.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.036 14:03:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:20.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.036 
14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:28:20.036 00:28:20.036 --- 10.0.0.2 ping statistics --- 00:28:20.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.036 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:28:20.036 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:28:20.037 00:28:20.037 --- 10.0.0.1 ping statistics --- 00:28:20.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.037 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.037 14:03:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1548173 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1548173 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@827 -- # '[' -z 1548173 ']' 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:20.037 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.295 [2024-07-14 14:03:58.060743] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:20.295 [2024-07-14 14:03:58.060822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.295 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.295 [2024-07-14 14:03:58.127815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.295 [2024-07-14 14:03:58.218979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.295 [2024-07-14 14:03:58.219041] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.295 [2024-07-14 14:03:58.219058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.295 [2024-07-14 14:03:58.219071] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.295 [2024-07-14 14:03:58.219082] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:20.295 [2024-07-14 14:03:58.219111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 [2024-07-14 14:03:58.354585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 null0 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 
14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d7842b38ae9947a287802ada9fcf6b97 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 [2024-07-14 14:03:58.394847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.553 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.811 nvme0n1 00:28:20.811 14:03:58 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.811 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:20.811 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.811 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.811 [ 00:28:20.811 { 00:28:20.811 "name": "nvme0n1", 00:28:20.811 "aliases": [ 00:28:20.811 "d7842b38-ae99-47a2-8780-2ada9fcf6b97" 00:28:20.811 ], 00:28:20.811 "product_name": "NVMe disk", 00:28:20.811 "block_size": 512, 00:28:20.811 "num_blocks": 2097152, 00:28:20.811 "uuid": "d7842b38-ae99-47a2-8780-2ada9fcf6b97", 00:28:20.811 "assigned_rate_limits": { 00:28:20.811 "rw_ios_per_sec": 0, 00:28:20.811 "rw_mbytes_per_sec": 0, 00:28:20.811 "r_mbytes_per_sec": 0, 00:28:20.811 "w_mbytes_per_sec": 0 00:28:20.811 }, 00:28:20.811 "claimed": false, 00:28:20.811 "zoned": false, 00:28:20.811 "supported_io_types": { 00:28:20.811 "read": true, 00:28:20.811 "write": true, 00:28:20.811 "unmap": false, 00:28:20.811 "write_zeroes": true, 00:28:20.811 "flush": true, 00:28:20.811 "reset": true, 00:28:20.811 "compare": true, 00:28:20.811 "compare_and_write": true, 00:28:20.811 "abort": true, 00:28:20.811 "nvme_admin": true, 00:28:20.811 "nvme_io": true 00:28:20.811 }, 00:28:20.811 "memory_domains": [ 00:28:20.811 { 00:28:20.811 "dma_device_id": "system", 00:28:20.811 "dma_device_type": 1 00:28:20.811 } 00:28:20.811 ], 00:28:20.811 "driver_specific": { 00:28:20.811 "nvme": [ 00:28:20.811 { 00:28:20.811 "trid": { 00:28:20.811 "trtype": "TCP", 00:28:20.811 "adrfam": "IPv4", 00:28:20.811 "traddr": "10.0.0.2", 00:28:20.811 "trsvcid": "4420", 00:28:20.811 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:20.811 }, 00:28:20.811 "ctrlr_data": { 00:28:20.811 "cntlid": 1, 00:28:20.811 "vendor_id": "0x8086", 00:28:20.811 "model_number": "SPDK bdev Controller", 00:28:20.811 "serial_number": "00000000000000000000", 
00:28:20.811 "firmware_revision": "24.05.1", 00:28:20.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.811 "oacs": { 00:28:20.811 "security": 0, 00:28:20.811 "format": 0, 00:28:20.811 "firmware": 0, 00:28:20.811 "ns_manage": 0 00:28:20.811 }, 00:28:20.811 "multi_ctrlr": true, 00:28:20.811 "ana_reporting": false 00:28:20.811 }, 00:28:20.811 "vs": { 00:28:20.811 "nvme_version": "1.3" 00:28:20.811 }, 00:28:20.811 "ns_data": { 00:28:20.811 "id": 1, 00:28:20.812 "can_share": true 00:28:20.812 } 00:28:20.812 } 00:28:20.812 ], 00:28:20.812 "mp_policy": "active_passive" 00:28:20.812 } 00:28:20.812 } 00:28:20.812 ] 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.812 [2024-07-14 14:03:58.647352] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:20.812 [2024-07-14 14:03:58.647440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe51760 (9): Bad file descriptor 00:28:20.812 [2024-07-14 14:03:58.790026] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.812 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.070 [ 00:28:21.070 { 00:28:21.070 "name": "nvme0n1", 00:28:21.070 "aliases": [ 00:28:21.070 "d7842b38-ae99-47a2-8780-2ada9fcf6b97" 00:28:21.070 ], 00:28:21.070 "product_name": "NVMe disk", 00:28:21.070 "block_size": 512, 00:28:21.070 "num_blocks": 2097152, 00:28:21.070 "uuid": "d7842b38-ae99-47a2-8780-2ada9fcf6b97", 00:28:21.070 "assigned_rate_limits": { 00:28:21.070 "rw_ios_per_sec": 0, 00:28:21.070 "rw_mbytes_per_sec": 0, 00:28:21.070 "r_mbytes_per_sec": 0, 00:28:21.070 "w_mbytes_per_sec": 0 00:28:21.070 }, 00:28:21.070 "claimed": false, 00:28:21.070 "zoned": false, 00:28:21.070 "supported_io_types": { 00:28:21.070 "read": true, 00:28:21.070 "write": true, 00:28:21.070 "unmap": false, 00:28:21.070 "write_zeroes": true, 00:28:21.070 "flush": true, 00:28:21.070 "reset": true, 00:28:21.070 "compare": true, 00:28:21.070 "compare_and_write": true, 00:28:21.070 "abort": true, 00:28:21.070 "nvme_admin": true, 00:28:21.070 "nvme_io": true 00:28:21.070 }, 00:28:21.070 "memory_domains": [ 00:28:21.070 { 00:28:21.070 "dma_device_id": "system", 00:28:21.070 "dma_device_type": 1 00:28:21.070 } 00:28:21.070 ], 00:28:21.070 "driver_specific": { 00:28:21.070 "nvme": [ 00:28:21.070 { 00:28:21.070 "trid": { 00:28:21.070 "trtype": "TCP", 00:28:21.070 "adrfam": "IPv4", 00:28:21.070 "traddr": "10.0.0.2", 00:28:21.070 "trsvcid": "4420", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:21.070 }, 00:28:21.070 "ctrlr_data": { 00:28:21.070 "cntlid": 2, 00:28:21.070 "vendor_id": "0x8086", 00:28:21.070 "model_number": "SPDK bdev Controller", 00:28:21.070 "serial_number": 
"00000000000000000000", 00:28:21.070 "firmware_revision": "24.05.1", 00:28:21.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.070 "oacs": { 00:28:21.070 "security": 0, 00:28:21.070 "format": 0, 00:28:21.070 "firmware": 0, 00:28:21.070 "ns_manage": 0 00:28:21.070 }, 00:28:21.070 "multi_ctrlr": true, 00:28:21.070 "ana_reporting": false 00:28:21.070 }, 00:28:21.070 "vs": { 00:28:21.070 "nvme_version": "1.3" 00:28:21.070 }, 00:28:21.070 "ns_data": { 00:28:21.070 "id": 1, 00:28:21.070 "can_share": true 00:28:21.070 } 00:28:21.070 } 00:28:21.070 ], 00:28:21.070 "mp_policy": "active_passive" 00:28:21.070 } 00:28:21.070 } 00:28:21.070 ] 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7iGBj0xXo4 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7iGBj0xXo4 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.070 [2024-07-14 14:03:58.839994] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:21.070 [2024-07-14 14:03:58.840118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:21.070 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7iGBj0xXo4 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.071 [2024-07-14 14:03:58.848016] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7iGBj0xXo4 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.071 [2024-07-14 14:03:58.856029] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:28:21.071 [2024-07-14 14:03:58.856086] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:21.071 nvme0n1 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.071 [ 00:28:21.071 { 00:28:21.071 "name": "nvme0n1", 00:28:21.071 "aliases": [ 00:28:21.071 "d7842b38-ae99-47a2-8780-2ada9fcf6b97" 00:28:21.071 ], 00:28:21.071 "product_name": "NVMe disk", 00:28:21.071 "block_size": 512, 00:28:21.071 "num_blocks": 2097152, 00:28:21.071 "uuid": "d7842b38-ae99-47a2-8780-2ada9fcf6b97", 00:28:21.071 "assigned_rate_limits": { 00:28:21.071 "rw_ios_per_sec": 0, 00:28:21.071 "rw_mbytes_per_sec": 0, 00:28:21.071 "r_mbytes_per_sec": 0, 00:28:21.071 "w_mbytes_per_sec": 0 00:28:21.071 }, 00:28:21.071 "claimed": false, 00:28:21.071 "zoned": false, 00:28:21.071 "supported_io_types": { 00:28:21.071 "read": true, 00:28:21.071 "write": true, 00:28:21.071 "unmap": false, 00:28:21.071 "write_zeroes": true, 00:28:21.071 "flush": true, 00:28:21.071 "reset": true, 00:28:21.071 "compare": true, 00:28:21.071 "compare_and_write": true, 00:28:21.071 "abort": true, 00:28:21.071 "nvme_admin": true, 00:28:21.071 "nvme_io": true 00:28:21.071 }, 00:28:21.071 "memory_domains": [ 00:28:21.071 { 00:28:21.071 "dma_device_id": "system", 00:28:21.071 "dma_device_type": 1 00:28:21.071 } 00:28:21.071 ], 00:28:21.071 "driver_specific": { 00:28:21.071 "nvme": [ 00:28:21.071 { 00:28:21.071 "trid": { 00:28:21.071 "trtype": "TCP", 00:28:21.071 "adrfam": "IPv4", 00:28:21.071 "traddr": "10.0.0.2", 00:28:21.071 "trsvcid": "4421", 00:28:21.071 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:28:21.071 }, 00:28:21.071 "ctrlr_data": { 00:28:21.071 "cntlid": 3, 00:28:21.071 "vendor_id": "0x8086", 00:28:21.071 "model_number": "SPDK bdev Controller", 00:28:21.071 "serial_number": "00000000000000000000", 00:28:21.071 "firmware_revision": "24.05.1", 00:28:21.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.071 "oacs": { 00:28:21.071 "security": 0, 00:28:21.071 "format": 0, 00:28:21.071 "firmware": 0, 00:28:21.071 "ns_manage": 0 00:28:21.071 }, 00:28:21.071 "multi_ctrlr": true, 00:28:21.071 "ana_reporting": false 00:28:21.071 }, 00:28:21.071 "vs": { 00:28:21.071 "nvme_version": "1.3" 00:28:21.071 }, 00:28:21.071 "ns_data": { 00:28:21.071 "id": 1, 00:28:21.071 "can_share": true 00:28:21.071 } 00:28:21.071 } 00:28:21.071 ], 00:28:21.071 "mp_policy": "active_passive" 00:28:21.071 } 00:28:21.071 } 00:28:21.071 ] 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.7iGBj0xXo4 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.071 14:03:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.071 rmmod nvme_tcp 00:28:21.071 rmmod nvme_fabrics 00:28:21.071 rmmod nvme_keyring 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1548173 ']' 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1548173 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1548173 ']' 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1548173 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1548173 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1548173' 00:28:21.071 killing process with pid 1548173 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1548173 00:28:21.071 [2024-07-14 14:03:59.040563] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:21.071 [2024-07-14 14:03:59.040603] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:21.071 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1548173 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.330 14:03:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.862 14:04:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:23.862 00:28:23.862 real 0m5.506s 00:28:23.862 user 0m2.093s 00:28:23.862 sys 0m1.810s 00:28:23.862 14:04:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:23.862 14:04:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.862 ************************************ 00:28:23.862 END TEST nvmf_async_init 00:28:23.862 ************************************ 00:28:23.862 14:04:01 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:23.862 14:04:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:23.863 14:04:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:23.863 14:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.863 
************************************ 00:28:23.863 START TEST dma 00:28:23.863 ************************************ 00:28:23.863 14:04:01 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:23.863 * Looking for test storage... 00:28:23.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.863 14:04:01 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.863 14:04:01 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.863 14:04:01 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.863 14:04:01 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.863 14:04:01 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:23.863 14:04:01 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.863 14:04:01 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.863 14:04:01 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:23.863 14:04:01 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:23.863 00:28:23.863 real 0m0.070s 00:28:23.863 user 0m0.032s 00:28:23.863 sys 0m0.044s 00:28:23.863 14:04:01 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:23.863 14:04:01 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:23.863 ************************************ 00:28:23.863 END TEST dma 00:28:23.863 ************************************ 00:28:23.863 14:04:01 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:23.863 14:04:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:23.863 14:04:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:23.863 14:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.863 ************************************ 00:28:23.863 START TEST nvmf_identify 00:28:23.863 ************************************ 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:23.863 * Looking for test storage... 
00:28:23.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.863 14:04:01 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.863 14:04:01 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.863 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.864 14:04:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.762 14:04:03 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.762 
14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:25.762 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:25.762 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:25.762 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:25.762 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:25.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:25.763 14:04:03 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:25.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:28:25.763 00:28:25.763 --- 10.0.0.2 ping statistics --- 00:28:25.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.763 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:25.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:28:25.763 00:28:25.763 --- 10.0.0.1 ping statistics --- 00:28:25.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.763 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1550299 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1550299 00:28:25.763 14:04:03 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1550299 ']' 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:25.763 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:25.763 [2024-07-14 14:04:03.638338] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:25.763 [2024-07-14 14:04:03.638422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.763 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.763 [2024-07-14 14:04:03.709358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.021 [2024-07-14 14:04:03.801540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.021 [2024-07-14 14:04:03.801602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.021 [2024-07-14 14:04:03.801619] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.021 [2024-07-14 14:04:03.801631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.021 [2024-07-14 14:04:03.801643] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:26.021 [2024-07-14 14:04:03.801734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.021 [2024-07-14 14:04:03.801789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.021 [2024-07-14 14:04:03.801833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.021 [2024-07-14 14:04:03.801835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.021 [2024-07-14 14:04:03.936635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.021 Malloc0 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.021 
14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.021 14:04:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.282 [2024-07-14 14:04:04.007601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.282 14:04:04 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.282 [ 00:28:26.282 { 00:28:26.282 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:26.282 "subtype": "Discovery", 00:28:26.282 "listen_addresses": [ 00:28:26.282 { 00:28:26.282 "trtype": "TCP", 00:28:26.282 "adrfam": "IPv4", 00:28:26.282 "traddr": "10.0.0.2", 00:28:26.282 "trsvcid": "4420" 00:28:26.282 } 00:28:26.282 ], 00:28:26.282 "allow_any_host": true, 00:28:26.282 "hosts": [] 00:28:26.282 }, 00:28:26.282 { 00:28:26.282 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.282 "subtype": "NVMe", 00:28:26.282 "listen_addresses": [ 00:28:26.282 { 00:28:26.282 "trtype": "TCP", 00:28:26.282 "adrfam": "IPv4", 00:28:26.282 "traddr": "10.0.0.2", 00:28:26.282 "trsvcid": "4420" 00:28:26.282 } 00:28:26.282 ], 00:28:26.282 "allow_any_host": true, 00:28:26.282 "hosts": [], 00:28:26.282 "serial_number": "SPDK00000000000001", 00:28:26.282 "model_number": "SPDK bdev Controller", 00:28:26.282 "max_namespaces": 32, 00:28:26.282 "min_cntlid": 1, 00:28:26.282 "max_cntlid": 65519, 00:28:26.282 "namespaces": [ 00:28:26.282 { 00:28:26.282 "nsid": 1, 00:28:26.282 "bdev_name": "Malloc0", 00:28:26.282 "name": "Malloc0", 00:28:26.282 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:26.282 "eui64": "ABCDEF0123456789", 00:28:26.282 "uuid": "0a48d8cd-0b2b-4bfb-8007-d1798d2e3a1f" 00:28:26.282 } 00:28:26.282 ] 00:28:26.282 } 00:28:26.282 ] 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.282 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:26.282 [2024-07-14 14:04:04.045221] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:26.282 [2024-07-14 14:04:04.045271] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550326 ] 00:28:26.282 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.282 [2024-07-14 14:04:04.080360] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:26.282 [2024-07-14 14:04:04.080413] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:26.282 [2024-07-14 14:04:04.080423] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:26.282 [2024-07-14 14:04:04.080439] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:26.282 [2024-07-14 14:04:04.080452] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:26.282 [2024-07-14 14:04:04.080653] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:26.282 [2024-07-14 14:04:04.080703] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d05980 0 00:28:26.282 [2024-07-14 14:04:04.093904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:26.282 [2024-07-14 14:04:04.093931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:26.282 [2024-07-14 14:04:04.093941] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:26.282 [2024-07-14 14:04:04.093947] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:26.282 [2024-07-14 14:04:04.094013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.094025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:26.282 [2024-07-14 14:04:04.094033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.282 [2024-07-14 14:04:04.094050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:26.282 [2024-07-14 14:04:04.094077] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.282 [2024-07-14 14:04:04.101903] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.282 [2024-07-14 14:04:04.101922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.282 [2024-07-14 14:04:04.101930] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.101937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.282 [2024-07-14 14:04:04.101955] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:26.282 [2024-07-14 14:04:04.101965] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:26.282 [2024-07-14 14:04:04.101974] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:26.282 [2024-07-14 14:04:04.101996] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.102005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.102012] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.282 [2024-07-14 14:04:04.102023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.282 [2024-07-14 14:04:04.102047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1d6d4c0, cid 0, qid 0 00:28:26.282 [2024-07-14 14:04:04.102146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.282 [2024-07-14 14:04:04.102159] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.282 [2024-07-14 14:04:04.102165] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.102172] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.282 [2024-07-14 14:04:04.102195] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:26.282 [2024-07-14 14:04:04.102208] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:26.282 [2024-07-14 14:04:04.102221] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.102229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.282 [2024-07-14 14:04:04.102235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.102245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.283 [2024-07-14 14:04:04.102266] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.283 [2024-07-14 14:04:04.102344] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.283 [2024-07-14 14:04:04.102356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.283 [2024-07-14 14:04:04.102363] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.283 [2024-07-14 
14:04:04.102383] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:26.283 [2024-07-14 14:04:04.102398] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:26.283 [2024-07-14 14:04:04.102410] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102417] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102424] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.102434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.283 [2024-07-14 14:04:04.102454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.283 [2024-07-14 14:04:04.102541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.283 [2024-07-14 14:04:04.102553] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.283 [2024-07-14 14:04:04.102559] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102566] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.283 [2024-07-14 14:04:04.102575] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:26.283 [2024-07-14 14:04:04.102592] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102607] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.102617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.283 [2024-07-14 14:04:04.102638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.283 [2024-07-14 14:04:04.102719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.283 [2024-07-14 14:04:04.102733] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.283 [2024-07-14 14:04:04.102740] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102746] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.283 [2024-07-14 14:04:04.102756] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:26.283 [2024-07-14 14:04:04.102765] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:26.283 [2024-07-14 14:04:04.102777] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:26.283 [2024-07-14 14:04:04.102887] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:26.283 [2024-07-14 14:04:04.102896] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:26.283 [2024-07-14 14:04:04.102910] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102918] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.102924] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.102935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.283 [2024-07-14 14:04:04.102956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.283 [2024-07-14 14:04:04.103043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.283 [2024-07-14 14:04:04.103055] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.283 [2024-07-14 14:04:04.103065] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.283 [2024-07-14 14:04:04.103082] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:26.283 [2024-07-14 14:04:04.103098] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103113] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.103124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.283 [2024-07-14 14:04:04.103144] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.283 [2024-07-14 14:04:04.103236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.283 [2024-07-14 14:04:04.103250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.283 [2024-07-14 14:04:04.103257] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103263] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.283 [2024-07-14 14:04:04.103272] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:26.283 [2024-07-14 14:04:04.103281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:26.283 [2024-07-14 14:04:04.103294] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:26.283 [2024-07-14 14:04:04.103308] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:26.283 [2024-07-14 14:04:04.103325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103334] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.103345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.283 [2024-07-14 14:04:04.103365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.283 [2024-07-14 14:04:04.103490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.283 [2024-07-14 14:04:04.103505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.283 [2024-07-14 14:04:04.103512] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103519] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1d05980): datao=0, datal=4096, cccid=0 00:28:26.283 [2024-07-14 14:04:04.103526] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6d4c0) on tqpair(0x1d05980): expected_datao=0, payload_size=4096 00:28:26.283 [2024-07-14 14:04:04.103534] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103545] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103553] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.283 [2024-07-14 14:04:04.103575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.283 [2024-07-14 14:04:04.103581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.283 [2024-07-14 14:04:04.103605] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:26.283 [2024-07-14 14:04:04.103618] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:26.283 [2024-07-14 14:04:04.103626] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:26.283 [2024-07-14 14:04:04.103635] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:26.283 [2024-07-14 14:04:04.103642] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:26.283 [2024-07-14 14:04:04.103650] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:26.283 [2024-07-14 
14:04:04.103664] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:26.283 [2024-07-14 14:04:04.103677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.283 [2024-07-14 14:04:04.103691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.283 [2024-07-14 14:04:04.103702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:26.284 [2024-07-14 14:04:04.103722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.284 [2024-07-14 14:04:04.103830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.284 [2024-07-14 14:04:04.103844] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.284 [2024-07-14 14:04:04.103865] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103872] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d4c0) on tqpair=0x1d05980 00:28:26.284 [2024-07-14 14:04:04.103894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.103918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.284 [2024-07-14 14:04:04.103928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103934] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.103949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.284 [2024-07-14 14:04:04.103958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103971] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.103980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.284 [2024-07-14 14:04:04.103989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.103995] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.104010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.284 [2024-07-14 14:04:04.104018] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:26.284 [2024-07-14 14:04:04.104038] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:26.284 [2024-07-14 14:04:04.104053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104061] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.104071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.284 [2024-07-14 14:04:04.104094] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d4c0, cid 0, qid 0 00:28:26.284 [2024-07-14 14:04:04.104106] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d620, cid 1, qid 0 00:28:26.284 [2024-07-14 14:04:04.104113] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d780, cid 2, qid 0 00:28:26.284 [2024-07-14 14:04:04.104121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.284 [2024-07-14 14:04:04.104128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6da40, cid 4, qid 0 00:28:26.284 [2024-07-14 14:04:04.104277] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.284 [2024-07-14 14:04:04.104290] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.284 [2024-07-14 14:04:04.104297] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104304] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6da40) on tqpair=0x1d05980 00:28:26.284 [2024-07-14 14:04:04.104314] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:26.284 [2024-07-14 14:04:04.104322] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:26.284 [2024-07-14 14:04:04.104339] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104348] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.104358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.284 [2024-07-14 14:04:04.104378] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6da40, cid 4, qid 0 00:28:26.284 [2024-07-14 14:04:04.104473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.284 [2024-07-14 14:04:04.104486] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.284 [2024-07-14 14:04:04.104492] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104498] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d05980): datao=0, datal=4096, cccid=4 00:28:26.284 [2024-07-14 14:04:04.104506] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6da40) on tqpair(0x1d05980): expected_datao=0, payload_size=4096 00:28:26.284 [2024-07-14 14:04:04.104513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104529] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.104538] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.144948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.284 [2024-07-14 14:04:04.144967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.284 [2024-07-14 14:04:04.144974] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.144981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6da40) on tqpair=0x1d05980 00:28:26.284 [2024-07-14 14:04:04.145001] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:26.284 [2024-07-14 14:04:04.145037] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145048] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.145060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.284 [2024-07-14 14:04:04.145077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145092] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.145101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.284 [2024-07-14 14:04:04.145129] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6da40, cid 4, qid 0 00:28:26.284 [2024-07-14 14:04:04.145141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dba0, cid 5, qid 0 00:28:26.284 [2024-07-14 14:04:04.145283] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.284 [2024-07-14 14:04:04.145296] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.284 [2024-07-14 14:04:04.145302] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145324] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d05980): datao=0, datal=1024, cccid=4 00:28:26.284 [2024-07-14 14:04:04.145331] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6da40) on tqpair(0x1d05980): expected_datao=0, payload_size=1024 00:28:26.284 [2024-07-14 14:04:04.145339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145349] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145356] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145364] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.284 [2024-07-14 14:04:04.145373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.284 [2024-07-14 14:04:04.145380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.145386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dba0) on tqpair=0x1d05980 00:28:26.284 [2024-07-14 14:04:04.189887] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.284 [2024-07-14 14:04:04.189905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.284 [2024-07-14 14:04:04.189911] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.189918] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6da40) on tqpair=0x1d05980 00:28:26.284 [2024-07-14 14:04:04.189936] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.189945] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d05980) 00:28:26.284 [2024-07-14 14:04:04.189956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.284 [2024-07-14 14:04:04.189998] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6da40, cid 4, qid 0 00:28:26.284 [2024-07-14 14:04:04.190099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.284 [2024-07-14 14:04:04.190113] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.284 [2024-07-14 14:04:04.190120] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:28:26.284 [2024-07-14 14:04:04.190126] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d05980): datao=0, datal=3072, cccid=4 00:28:26.284 [2024-07-14 14:04:04.190134] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6da40) on tqpair(0x1d05980): expected_datao=0, payload_size=3072 00:28:26.284 [2024-07-14 14:04:04.190141] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.190163] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.284 [2024-07-14 14:04:04.190172] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.190213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.285 [2024-07-14 14:04:04.190225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.285 [2024-07-14 14:04:04.190235] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.190243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6da40) on tqpair=0x1d05980 00:28:26.285 [2024-07-14 14:04:04.190259] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.190267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d05980) 00:28:26.285 [2024-07-14 14:04:04.190278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.285 [2024-07-14 14:04:04.190305] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6da40, cid 4, qid 0 00:28:26.285 [2024-07-14 14:04:04.190399] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.285 [2024-07-14 14:04:04.190411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.285 [2024-07-14 14:04:04.190418] 
nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.190424] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d05980): datao=0, datal=8, cccid=4 00:28:26.285 [2024-07-14 14:04:04.190432] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6da40) on tqpair(0x1d05980): expected_datao=0, payload_size=8 00:28:26.285 [2024-07-14 14:04:04.190439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.190449] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.190456] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.231905] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.285 [2024-07-14 14:04:04.231924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.285 [2024-07-14 14:04:04.231931] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.285 [2024-07-14 14:04:04.231938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6da40) on tqpair=0x1d05980 00:28:26.285 ===================================================== 00:28:26.285 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:26.285 ===================================================== 00:28:26.285 Controller Capabilities/Features 00:28:26.285 ================================ 00:28:26.285 Vendor ID: 0000 00:28:26.285 Subsystem Vendor ID: 0000 00:28:26.285 Serial Number: .................... 00:28:26.285 Model Number: ........................................ 
00:28:26.285 Firmware Version: 24.05.1 00:28:26.285 Recommended Arb Burst: 0 00:28:26.285 IEEE OUI Identifier: 00 00 00 00:28:26.285 Multi-path I/O 00:28:26.285 May have multiple subsystem ports: No 00:28:26.285 May have multiple controllers: No 00:28:26.285 Associated with SR-IOV VF: No 00:28:26.285 Max Data Transfer Size: 131072 00:28:26.285 Max Number of Namespaces: 0 00:28:26.285 Max Number of I/O Queues: 1024 00:28:26.285 NVMe Specification Version (VS): 1.3 00:28:26.285 NVMe Specification Version (Identify): 1.3 00:28:26.285 Maximum Queue Entries: 128 00:28:26.285 Contiguous Queues Required: Yes 00:28:26.285 Arbitration Mechanisms Supported 00:28:26.285 Weighted Round Robin: Not Supported 00:28:26.285 Vendor Specific: Not Supported 00:28:26.285 Reset Timeout: 15000 ms 00:28:26.285 Doorbell Stride: 4 bytes 00:28:26.285 NVM Subsystem Reset: Not Supported 00:28:26.285 Command Sets Supported 00:28:26.285 NVM Command Set: Supported 00:28:26.285 Boot Partition: Not Supported 00:28:26.285 Memory Page Size Minimum: 4096 bytes 00:28:26.285 Memory Page Size Maximum: 4096 bytes 00:28:26.285 Persistent Memory Region: Not Supported 00:28:26.285 Optional Asynchronous Events Supported 00:28:26.285 Namespace Attribute Notices: Not Supported 00:28:26.285 Firmware Activation Notices: Not Supported 00:28:26.285 ANA Change Notices: Not Supported 00:28:26.285 PLE Aggregate Log Change Notices: Not Supported 00:28:26.285 LBA Status Info Alert Notices: Not Supported 00:28:26.285 EGE Aggregate Log Change Notices: Not Supported 00:28:26.285 Normal NVM Subsystem Shutdown event: Not Supported 00:28:26.285 Zone Descriptor Change Notices: Not Supported 00:28:26.285 Discovery Log Change Notices: Supported 00:28:26.285 Controller Attributes 00:28:26.285 128-bit Host Identifier: Not Supported 00:28:26.285 Non-Operational Permissive Mode: Not Supported 00:28:26.285 NVM Sets: Not Supported 00:28:26.285 Read Recovery Levels: Not Supported 00:28:26.285 Endurance Groups: Not Supported 
00:28:26.285 Predictable Latency Mode: Not Supported 00:28:26.285 Traffic Based Keep ALive: Not Supported 00:28:26.285 Namespace Granularity: Not Supported 00:28:26.285 SQ Associations: Not Supported 00:28:26.285 UUID List: Not Supported 00:28:26.285 Multi-Domain Subsystem: Not Supported 00:28:26.285 Fixed Capacity Management: Not Supported 00:28:26.285 Variable Capacity Management: Not Supported 00:28:26.285 Delete Endurance Group: Not Supported 00:28:26.285 Delete NVM Set: Not Supported 00:28:26.285 Extended LBA Formats Supported: Not Supported 00:28:26.285 Flexible Data Placement Supported: Not Supported 00:28:26.285 00:28:26.285 Controller Memory Buffer Support 00:28:26.285 ================================ 00:28:26.285 Supported: No 00:28:26.285 00:28:26.285 Persistent Memory Region Support 00:28:26.285 ================================ 00:28:26.285 Supported: No 00:28:26.285 00:28:26.285 Admin Command Set Attributes 00:28:26.285 ============================ 00:28:26.285 Security Send/Receive: Not Supported 00:28:26.285 Format NVM: Not Supported 00:28:26.285 Firmware Activate/Download: Not Supported 00:28:26.285 Namespace Management: Not Supported 00:28:26.285 Device Self-Test: Not Supported 00:28:26.285 Directives: Not Supported 00:28:26.285 NVMe-MI: Not Supported 00:28:26.285 Virtualization Management: Not Supported 00:28:26.285 Doorbell Buffer Config: Not Supported 00:28:26.285 Get LBA Status Capability: Not Supported 00:28:26.285 Command & Feature Lockdown Capability: Not Supported 00:28:26.285 Abort Command Limit: 1 00:28:26.285 Async Event Request Limit: 4 00:28:26.285 Number of Firmware Slots: N/A 00:28:26.285 Firmware Slot 1 Read-Only: N/A 00:28:26.285 Firmware Activation Without Reset: N/A 00:28:26.285 Multiple Update Detection Support: N/A 00:28:26.285 Firmware Update Granularity: No Information Provided 00:28:26.285 Per-Namespace SMART Log: No 00:28:26.285 Asymmetric Namespace Access Log Page: Not Supported 00:28:26.285 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:26.285 Command Effects Log Page: Not Supported 00:28:26.285 Get Log Page Extended Data: Supported 00:28:26.285 Telemetry Log Pages: Not Supported 00:28:26.285 Persistent Event Log Pages: Not Supported 00:28:26.285 Supported Log Pages Log Page: May Support 00:28:26.285 Commands Supported & Effects Log Page: Not Supported 00:28:26.285 Feature Identifiers & Effects Log Page:May Support 00:28:26.285 NVMe-MI Commands & Effects Log Page: May Support 00:28:26.285 Data Area 4 for Telemetry Log: Not Supported 00:28:26.286 Error Log Page Entries Supported: 128 00:28:26.286 Keep Alive: Not Supported 00:28:26.286 00:28:26.286 NVM Command Set Attributes 00:28:26.286 ========================== 00:28:26.286 Submission Queue Entry Size 00:28:26.286 Max: 1 00:28:26.286 Min: 1 00:28:26.286 Completion Queue Entry Size 00:28:26.286 Max: 1 00:28:26.286 Min: 1 00:28:26.286 Number of Namespaces: 0 00:28:26.286 Compare Command: Not Supported 00:28:26.286 Write Uncorrectable Command: Not Supported 00:28:26.286 Dataset Management Command: Not Supported 00:28:26.286 Write Zeroes Command: Not Supported 00:28:26.286 Set Features Save Field: Not Supported 00:28:26.286 Reservations: Not Supported 00:28:26.286 Timestamp: Not Supported 00:28:26.286 Copy: Not Supported 00:28:26.286 Volatile Write Cache: Not Present 00:28:26.286 Atomic Write Unit (Normal): 1 00:28:26.286 Atomic Write Unit (PFail): 1 00:28:26.286 Atomic Compare & Write Unit: 1 00:28:26.286 Fused Compare & Write: Supported 00:28:26.286 Scatter-Gather List 00:28:26.286 SGL Command Set: Supported 00:28:26.286 SGL Keyed: Supported 00:28:26.286 SGL Bit Bucket Descriptor: Not Supported 00:28:26.286 SGL Metadata Pointer: Not Supported 00:28:26.286 Oversized SGL: Not Supported 00:28:26.286 SGL Metadata Address: Not Supported 00:28:26.286 SGL Offset: Supported 00:28:26.286 Transport SGL Data Block: Not Supported 00:28:26.286 Replay Protected Memory Block: Not Supported 00:28:26.286 00:28:26.286 
Firmware Slot Information 00:28:26.286 ========================= 00:28:26.286 Active slot: 0 00:28:26.286 00:28:26.286 00:28:26.286 Error Log 00:28:26.286 ========= 00:28:26.286 00:28:26.286 Active Namespaces 00:28:26.286 ================= 00:28:26.286 Discovery Log Page 00:28:26.286 ================== 00:28:26.286 Generation Counter: 2 00:28:26.286 Number of Records: 2 00:28:26.286 Record Format: 0 00:28:26.286 00:28:26.286 Discovery Log Entry 0 00:28:26.286 ---------------------- 00:28:26.286 Transport Type: 3 (TCP) 00:28:26.286 Address Family: 1 (IPv4) 00:28:26.286 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:26.286 Entry Flags: 00:28:26.286 Duplicate Returned Information: 1 00:28:26.286 Explicit Persistent Connection Support for Discovery: 1 00:28:26.286 Transport Requirements: 00:28:26.286 Secure Channel: Not Required 00:28:26.286 Port ID: 0 (0x0000) 00:28:26.286 Controller ID: 65535 (0xffff) 00:28:26.286 Admin Max SQ Size: 128 00:28:26.286 Transport Service Identifier: 4420 00:28:26.286 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:26.286 Transport Address: 10.0.0.2 00:28:26.286 Discovery Log Entry 1 00:28:26.286 ---------------------- 00:28:26.286 Transport Type: 3 (TCP) 00:28:26.286 Address Family: 1 (IPv4) 00:28:26.286 Subsystem Type: 2 (NVM Subsystem) 00:28:26.286 Entry Flags: 00:28:26.286 Duplicate Returned Information: 0 00:28:26.286 Explicit Persistent Connection Support for Discovery: 0 00:28:26.286 Transport Requirements: 00:28:26.286 Secure Channel: Not Required 00:28:26.286 Port ID: 0 (0x0000) 00:28:26.286 Controller ID: 65535 (0xffff) 00:28:26.286 Admin Max SQ Size: 128 00:28:26.286 Transport Service Identifier: 4420 00:28:26.286 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:26.286 Transport Address: 10.0.0.2 [2024-07-14 14:04:04.232068] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:26.286 [2024-07-14 14:04:04.232093] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.286 [2024-07-14 14:04:04.232106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.286 [2024-07-14 14:04:04.232115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.286 [2024-07-14 14:04:04.232125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.286 [2024-07-14 14:04:04.232142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232157] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.286 [2024-07-14 14:04:04.232168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.286 [2024-07-14 14:04:04.232218] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.286 [2024-07-14 14:04:04.232309] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.286 [2024-07-14 14:04:04.232324] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.286 [2024-07-14 14:04:04.232331] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.286 [2024-07-14 14:04:04.232351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.286 [2024-07-14 
14:04:04.232365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.286 [2024-07-14 14:04:04.232380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.286 [2024-07-14 14:04:04.232407] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.286 [2024-07-14 14:04:04.232506] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.286 [2024-07-14 14:04:04.232520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.286 [2024-07-14 14:04:04.232526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.286 [2024-07-14 14:04:04.232543] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:26.286 [2024-07-14 14:04:04.232551] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:26.286 [2024-07-14 14:04:04.232567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232576] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232582] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.286 [2024-07-14 14:04:04.232592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.286 [2024-07-14 14:04:04.232613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.286 [2024-07-14 14:04:04.232693] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.286 [2024-07-14 
14:04:04.232707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.286 [2024-07-14 14:04:04.232713] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232720] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.286 [2024-07-14 14:04:04.232738] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232747] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.286 [2024-07-14 14:04:04.232754] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.232764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.232784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.232863] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.232883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.232891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.232897] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 [2024-07-14 14:04:04.232915] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.232924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.232931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.232941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 
[2024-07-14 14:04:04.232961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.233046] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.233060] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.233066] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233073] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 [2024-07-14 14:04:04.233094] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.233121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.233141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.233227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.233239] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.233245] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 [2024-07-14 14:04:04.233268] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233284] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.233294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.233314] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.233394] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.233408] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.233414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 [2024-07-14 14:04:04.233438] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233454] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.233464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.233484] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.233560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.233574] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.233581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233587] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 
[2024-07-14 14:04:04.233605] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233620] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.233631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.233650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.233723] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.233734] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.233741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233748] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 [2024-07-14 14:04:04.233764] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233779] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233787] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.233797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.233817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.233912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.233927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 
[2024-07-14 14:04:04.233934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233940] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.287 [2024-07-14 14:04:04.233958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.233973] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.287 [2024-07-14 14:04:04.233984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.287 [2024-07-14 14:04:04.234004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.287 [2024-07-14 14:04:04.234076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.287 [2024-07-14 14:04:04.234088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.287 [2024-07-14 14:04:04.234094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.287 [2024-07-14 14:04:04.234101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.234118] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.234143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.234163] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, 
cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.234242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.234256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.234263] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.234287] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234296] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234302] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.234312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.234332] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.234408] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.234419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.234426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234433] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.234449] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234459] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.234479] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.234499] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.234577] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.234589] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.234595] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234602] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.234619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234628] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.234644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.234664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.234736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.234748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.234755] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.234778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234787] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.234804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.234823] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.234913] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.234928] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.234934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.234958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234968] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.234974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.234984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.235004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.235078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.235090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.235097] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235103] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.235120] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235130] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235136] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.235151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.235178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.235255] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.235266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.235273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235279] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.235296] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235305] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235312] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.235322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.235341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.235414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 
[2024-07-14 14:04:04.235425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.235432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235439] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.235456] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235465] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.288 [2024-07-14 14:04:04.235481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.288 [2024-07-14 14:04:04.235501] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.288 [2024-07-14 14:04:04.235577] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.288 [2024-07-14 14:04:04.235591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.288 [2024-07-14 14:04:04.235597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.288 [2024-07-14 14:04:04.235621] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235631] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.288 [2024-07-14 14:04:04.235637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.289 [2024-07-14 14:04:04.235647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.289 [2024-07-14 14:04:04.235667] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.289 [2024-07-14 14:04:04.235742] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.289 [2024-07-14 14:04:04.235754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.289 [2024-07-14 14:04:04.235761] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.235767] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.289 [2024-07-14 14:04:04.235784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.235793] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.235799] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.289 [2024-07-14 14:04:04.235809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.289 [2024-07-14 14:04:04.235833] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.289 [2024-07-14 14:04:04.239894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.289 [2024-07-14 14:04:04.239911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.289 [2024-07-14 14:04:04.239917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.239924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.289 [2024-07-14 14:04:04.239942] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.239952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.239958] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d05980) 00:28:26.289 [2024-07-14 14:04:04.239969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.289 [2024-07-14 14:04:04.239990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6d8e0, cid 3, qid 0 00:28:26.289 [2024-07-14 14:04:04.240078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.289 [2024-07-14 14:04:04.240090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.289 [2024-07-14 14:04:04.240096] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.289 [2024-07-14 14:04:04.240103] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6d8e0) on tqpair=0x1d05980 00:28:26.289 [2024-07-14 14:04:04.240117] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:26.289 00:28:26.289 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:26.550 [2024-07-14 14:04:04.270992] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:28:26.550 [2024-07-14 14:04:04.271029] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550350 ] 00:28:26.550 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.550 [2024-07-14 14:04:04.304698] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:26.550 [2024-07-14 14:04:04.304742] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:26.550 [2024-07-14 14:04:04.304752] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:26.550 [2024-07-14 14:04:04.304765] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:26.550 [2024-07-14 14:04:04.304777] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:26.550 [2024-07-14 14:04:04.305005] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:26.550 [2024-07-14 14:04:04.305047] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19b6980 0 00:28:26.550 [2024-07-14 14:04:04.319893] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:26.550 [2024-07-14 14:04:04.319912] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:26.550 [2024-07-14 14:04:04.319920] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:26.550 [2024-07-14 14:04:04.319926] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:26.550 [2024-07-14 14:04:04.319980] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.319995] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.550 [2024-07-14 
14:04:04.320003] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.550 [2024-07-14 14:04:04.320018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:26.550 [2024-07-14 14:04:04.320045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.550 [2024-07-14 14:04:04.327888] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.550 [2024-07-14 14:04:04.327905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.550 [2024-07-14 14:04:04.327913] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.327920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.550 [2024-07-14 14:04:04.327935] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:26.550 [2024-07-14 14:04:04.327960] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:26.550 [2024-07-14 14:04:04.327970] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:26.550 [2024-07-14 14:04:04.327989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.327998] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.550 [2024-07-14 14:04:04.328016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.550 [2024-07-14 14:04:04.328040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.550 
[2024-07-14 14:04:04.328133] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.550 [2024-07-14 14:04:04.328145] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.550 [2024-07-14 14:04:04.328153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.550 [2024-07-14 14:04:04.328173] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:26.550 [2024-07-14 14:04:04.328187] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:26.550 [2024-07-14 14:04:04.328200] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328208] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328214] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.550 [2024-07-14 14:04:04.328225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.550 [2024-07-14 14:04:04.328246] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.550 [2024-07-14 14:04:04.328326] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.550 [2024-07-14 14:04:04.328341] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.550 [2024-07-14 14:04:04.328348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328355] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.550 [2024-07-14 14:04:04.328365] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:26.550 [2024-07-14 14:04:04.328379] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:26.550 [2024-07-14 14:04:04.328391] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328399] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328409] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.550 [2024-07-14 14:04:04.328421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.550 [2024-07-14 14:04:04.328442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.550 [2024-07-14 14:04:04.328524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.550 [2024-07-14 14:04:04.328538] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.550 [2024-07-14 14:04:04.328545] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328552] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.550 [2024-07-14 14:04:04.328562] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:26.550 [2024-07-14 14:04:04.328579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.550 [2024-07-14 14:04:04.328605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.550 [2024-07-14 14:04:04.328626] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.550 [2024-07-14 14:04:04.328700] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.550 [2024-07-14 14:04:04.328712] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.550 [2024-07-14 14:04:04.328719] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.550 [2024-07-14 14:04:04.328726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.550 [2024-07-14 14:04:04.328735] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:26.550 [2024-07-14 14:04:04.328743] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:26.550 [2024-07-14 14:04:04.328756] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:26.551 [2024-07-14 14:04:04.328866] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:26.551 [2024-07-14 14:04:04.328873] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:26.551 [2024-07-14 14:04:04.328893] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.328901] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.328908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.328918] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.551 [2024-07-14 14:04:04.328939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.551 [2024-07-14 14:04:04.329033] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.551 [2024-07-14 14:04:04.329047] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.551 [2024-07-14 14:04:04.329054] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329061] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.551 [2024-07-14 14:04:04.329071] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:26.551 [2024-07-14 14:04:04.329087] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329100] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329107] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.551 [2024-07-14 14:04:04.329139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.551 [2024-07-14 14:04:04.329221] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.551 [2024-07-14 14:04:04.329233] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.551 [2024-07-14 14:04:04.329240] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329247] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on 
tqpair=0x19b6980 00:28:26.551 [2024-07-14 14:04:04.329255] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:26.551 [2024-07-14 14:04:04.329264] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:26.551 [2024-07-14 14:04:04.329277] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:26.551 [2024-07-14 14:04:04.329291] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:26.551 [2024-07-14 14:04:04.329307] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329315] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.551 [2024-07-14 14:04:04.329347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.551 [2024-07-14 14:04:04.329471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.551 [2024-07-14 14:04:04.329484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.551 [2024-07-14 14:04:04.329491] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329497] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=4096, cccid=0 00:28:26.551 [2024-07-14 14:04:04.329505] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1e4c0) on tqpair(0x19b6980): expected_datao=0, payload_size=4096 00:28:26.551 
[2024-07-14 14:04:04.329513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329523] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329531] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.551 [2024-07-14 14:04:04.329552] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.551 [2024-07-14 14:04:04.329559] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329566] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.551 [2024-07-14 14:04:04.329582] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:26.551 [2024-07-14 14:04:04.329591] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:26.551 [2024-07-14 14:04:04.329599] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:26.551 [2024-07-14 14:04:04.329606] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:26.551 [2024-07-14 14:04:04.329613] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:26.551 [2024-07-14 14:04:04.329625] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:26.551 [2024-07-14 14:04:04.329640] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:26.551 [2024-07-14 14:04:04.329651] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 
14:04:04.329659] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329666] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:26.551 [2024-07-14 14:04:04.329698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.551 [2024-07-14 14:04:04.329790] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.551 [2024-07-14 14:04:04.329804] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.551 [2024-07-14 14:04:04.329812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329818] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e4c0) on tqpair=0x19b6980 00:28:26.551 [2024-07-14 14:04:04.329830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.551 [2024-07-14 14:04:04.329864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.551 [2024-07-14 14:04:04.329906] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329913] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329919] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.551 [2024-07-14 14:04:04.329937] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.329951] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.551 [2024-07-14 14:04:04.329959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.551 [2024-07-14 14:04:04.329968] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:26.551 [2024-07-14 14:04:04.329987] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:26.551 [2024-07-14 14:04:04.330000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.551 [2024-07-14 14:04:04.330007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.552 [2024-07-14 14:04:04.330017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.552 [2024-07-14 14:04:04.330040] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e4c0, cid 0, qid 0 00:28:26.552 [2024-07-14 14:04:04.330055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e620, cid 1, qid 0 00:28:26.552 [2024-07-14 14:04:04.330063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e780, cid 2, qid 0 00:28:26.552 [2024-07-14 14:04:04.330071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.552 [2024-07-14 14:04:04.330078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.552 [2024-07-14 14:04:04.330193] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.552 [2024-07-14 14:04:04.330208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.552 [2024-07-14 14:04:04.330215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.330222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.552 [2024-07-14 14:04:04.330231] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:26.552 [2024-07-14 14:04:04.330240] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.330254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.330265] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.330275] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.330283] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.330290] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.552 [2024-07-14 14:04:04.330300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:26.552 [2024-07-14 14:04:04.330321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.552 [2024-07-14 14:04:04.333888] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.552 [2024-07-14 14:04:04.333905] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.552 [2024-07-14 14:04:04.333913] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.333920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.552 [2024-07-14 14:04:04.333991] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334011] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334033] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.552 [2024-07-14 14:04:04.334045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.552 [2024-07-14 14:04:04.334067] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.552 [2024-07-14 14:04:04.334162] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.552 [2024-07-14 14:04:04.334174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.552 [2024-07-14 14:04:04.334181] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334188] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=4096, cccid=4 00:28:26.552 [2024-07-14 14:04:04.334195] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ea40) on tqpair(0x19b6980): expected_datao=0, payload_size=4096 00:28:26.552 [2024-07-14 14:04:04.334203] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334226] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334236] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.552 [2024-07-14 14:04:04.334257] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.552 [2024-07-14 14:04:04.334264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.552 [2024-07-14 14:04:04.334286] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:26.552 [2024-07-14 14:04:04.334307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334324] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.552 
[2024-07-14 14:04:04.334346] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.552 [2024-07-14 14:04:04.334357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.552 [2024-07-14 14:04:04.334378] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.552 [2024-07-14 14:04:04.334492] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.552 [2024-07-14 14:04:04.334507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.552 [2024-07-14 14:04:04.334514] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334520] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=4096, cccid=4 00:28:26.552 [2024-07-14 14:04:04.334528] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ea40) on tqpair(0x19b6980): expected_datao=0, payload_size=4096 00:28:26.552 [2024-07-14 14:04:04.334535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334552] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334561] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.552 [2024-07-14 14:04:04.334604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.552 [2024-07-14 14:04:04.334611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.552 [2024-07-14 14:04:04.334638] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334656] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334670] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.552 [2024-07-14 14:04:04.334689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.552 [2024-07-14 14:04:04.334710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.552 [2024-07-14 14:04:04.334798] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.552 [2024-07-14 14:04:04.334810] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.552 [2024-07-14 14:04:04.334817] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334823] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=4096, cccid=4 00:28:26.552 [2024-07-14 14:04:04.334835] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ea40) on tqpair(0x19b6980): expected_datao=0, payload_size=4096 00:28:26.552 [2024-07-14 14:04:04.334843] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334870] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334890] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334902] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.552 [2024-07-14 14:04:04.334912] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.552 [2024-07-14 14:04:04.334919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.552 [2024-07-14 14:04:04.334926] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.552 [2024-07-14 14:04:04.334940] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334955] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:26.552 [2024-07-14 14:04:04.334969] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:26.553 [2024-07-14 14:04:04.334980] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:26.553 [2024-07-14 14:04:04.334989] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:26.553 [2024-07-14 14:04:04.334998] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:26.553 [2024-07-14 14:04:04.335006] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:26.553 [2024-07-14 14:04:04.335014] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:26.553 [2024-07-14 14:04:04.335037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335047] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 
14:04:04.335058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:26.553 [2024-07-14 14:04:04.335116] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.553 [2024-07-14 14:04:04.335128] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1eba0, cid 5, qid 0 00:28:26.553 [2024-07-14 14:04:04.335225] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.553 [2024-07-14 14:04:04.335238] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.553 [2024-07-14 14:04:04.335245] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.553 [2024-07-14 14:04:04.335263] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.553 [2024-07-14 14:04:04.335272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.553 [2024-07-14 14:04:04.335279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1eba0) on tqpair=0x19b6980 00:28:26.553 [2024-07-14 14:04:04.335306] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 
[2024-07-14 14:04:04.335316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1eba0, cid 5, qid 0 00:28:26.553 [2024-07-14 14:04:04.335429] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.553 [2024-07-14 14:04:04.335443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.553 [2024-07-14 14:04:04.335451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1eba0) on tqpair=0x19b6980 00:28:26.553 [2024-07-14 14:04:04.335475] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335484] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335514] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1eba0, cid 5, qid 0 00:28:26.553 [2024-07-14 14:04:04.335595] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.553 [2024-07-14 14:04:04.335607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.553 [2024-07-14 14:04:04.335614] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335621] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1eba0) on tqpair=0x19b6980 00:28:26.553 [2024-07-14 14:04:04.335638] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335647] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1eba0, cid 5, qid 0 00:28:26.553 [2024-07-14 14:04:04.335754] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.553 [2024-07-14 14:04:04.335766] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.553 [2024-07-14 14:04:04.335773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335780] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1eba0) on tqpair=0x19b6980 00:28:26.553 [2024-07-14 14:04:04.335800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335810] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335839] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335860] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335901] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.335910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19b6980) 00:28:26.553 [2024-07-14 14:04:04.335920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.553 [2024-07-14 14:04:04.335941] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1eba0, cid 5, qid 0 00:28:26.553 [2024-07-14 14:04:04.335952] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ea40, cid 4, qid 0 00:28:26.553 [2024-07-14 14:04:04.335960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ed00, cid 6, qid 0 00:28:26.553 [2024-07-14 14:04:04.335967] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ee60, cid 7, qid 0 00:28:26.553 [2024-07-14 14:04:04.336124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.553 [2024-07-14 14:04:04.336139] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.553 [2024-07-14 14:04:04.336146] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336153] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=8192, cccid=5 00:28:26.553 [2024-07-14 14:04:04.336161] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1eba0) on 
tqpair(0x19b6980): expected_datao=0, payload_size=8192 00:28:26.553 [2024-07-14 14:04:04.336168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336189] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336199] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336208] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.553 [2024-07-14 14:04:04.336217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.553 [2024-07-14 14:04:04.336224] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336230] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=512, cccid=4 00:28:26.553 [2024-07-14 14:04:04.336238] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ea40) on tqpair(0x19b6980): expected_datao=0, payload_size=512 00:28:26.553 [2024-07-14 14:04:04.336245] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336254] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336262] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.553 [2024-07-14 14:04:04.336270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.553 [2024-07-14 14:04:04.336279] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.553 [2024-07-14 14:04:04.336286] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336292] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=512, cccid=6 00:28:26.554 [2024-07-14 14:04:04.336300] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ed00) on tqpair(0x19b6980): expected_datao=0, payload_size=512 00:28:26.554 
[2024-07-14 14:04:04.336307] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336316] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336323] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:26.554 [2024-07-14 14:04:04.336340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:26.554 [2024-07-14 14:04:04.336347] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336353] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6980): datao=0, datal=4096, cccid=7 00:28:26.554 [2024-07-14 14:04:04.336361] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ee60) on tqpair(0x19b6980): expected_datao=0, payload_size=4096 00:28:26.554 [2024-07-14 14:04:04.336372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336382] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336389] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.554 [2024-07-14 14:04:04.336410] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.554 [2024-07-14 14:04:04.336417] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336424] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1eba0) on tqpair=0x19b6980 00:28:26.554 [2024-07-14 14:04:04.336443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.554 [2024-07-14 14:04:04.336455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.554 [2024-07-14 14:04:04.336462] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336469] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ea40) on tqpair=0x19b6980 00:28:26.554 [2024-07-14 14:04:04.336499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.554 [2024-07-14 14:04:04.336510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.554 [2024-07-14 14:04:04.336517] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ed00) on tqpair=0x19b6980 00:28:26.554 [2024-07-14 14:04:04.336537] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.554 [2024-07-14 14:04:04.336562] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.554 [2024-07-14 14:04:04.336569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.554 [2024-07-14 14:04:04.336575] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ee60) on tqpair=0x19b6980 00:28:26.554 ===================================================== 00:28:26.554 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:26.554 ===================================================== 00:28:26.554 Controller Capabilities/Features 00:28:26.554 ================================ 00:28:26.554 Vendor ID: 8086 00:28:26.554 Subsystem Vendor ID: 8086 00:28:26.554 Serial Number: SPDK00000000000001 00:28:26.554 Model Number: SPDK bdev Controller 00:28:26.554 Firmware Version: 24.05.1 00:28:26.554 Recommended Arb Burst: 6 00:28:26.554 IEEE OUI Identifier: e4 d2 5c 00:28:26.554 Multi-path I/O 00:28:26.554 May have multiple subsystem ports: Yes 00:28:26.554 May have multiple controllers: Yes 00:28:26.554 Associated with SR-IOV VF: No 00:28:26.554 Max Data Transfer Size: 131072 00:28:26.554 Max Number of Namespaces: 32 
00:28:26.554 Max Number of I/O Queues: 127 00:28:26.554 NVMe Specification Version (VS): 1.3 00:28:26.554 NVMe Specification Version (Identify): 1.3 00:28:26.554 Maximum Queue Entries: 128 00:28:26.554 Contiguous Queues Required: Yes 00:28:26.554 Arbitration Mechanisms Supported 00:28:26.554 Weighted Round Robin: Not Supported 00:28:26.554 Vendor Specific: Not Supported 00:28:26.554 Reset Timeout: 15000 ms 00:28:26.554 Doorbell Stride: 4 bytes 00:28:26.554 NVM Subsystem Reset: Not Supported 00:28:26.554 Command Sets Supported 00:28:26.554 NVM Command Set: Supported 00:28:26.554 Boot Partition: Not Supported 00:28:26.554 Memory Page Size Minimum: 4096 bytes 00:28:26.554 Memory Page Size Maximum: 4096 bytes 00:28:26.554 Persistent Memory Region: Not Supported 00:28:26.554 Optional Asynchronous Events Supported 00:28:26.554 Namespace Attribute Notices: Supported 00:28:26.554 Firmware Activation Notices: Not Supported 00:28:26.554 ANA Change Notices: Not Supported 00:28:26.554 PLE Aggregate Log Change Notices: Not Supported 00:28:26.554 LBA Status Info Alert Notices: Not Supported 00:28:26.554 EGE Aggregate Log Change Notices: Not Supported 00:28:26.554 Normal NVM Subsystem Shutdown event: Not Supported 00:28:26.554 Zone Descriptor Change Notices: Not Supported 00:28:26.554 Discovery Log Change Notices: Not Supported 00:28:26.554 Controller Attributes 00:28:26.554 128-bit Host Identifier: Supported 00:28:26.554 Non-Operational Permissive Mode: Not Supported 00:28:26.554 NVM Sets: Not Supported 00:28:26.554 Read Recovery Levels: Not Supported 00:28:26.554 Endurance Groups: Not Supported 00:28:26.554 Predictable Latency Mode: Not Supported 00:28:26.554 Traffic Based Keep ALive: Not Supported 00:28:26.554 Namespace Granularity: Not Supported 00:28:26.554 SQ Associations: Not Supported 00:28:26.554 UUID List: Not Supported 00:28:26.554 Multi-Domain Subsystem: Not Supported 00:28:26.554 Fixed Capacity Management: Not Supported 00:28:26.554 Variable Capacity Management: Not 
Supported 00:28:26.554 Delete Endurance Group: Not Supported 00:28:26.554 Delete NVM Set: Not Supported 00:28:26.554 Extended LBA Formats Supported: Not Supported 00:28:26.554 Flexible Data Placement Supported: Not Supported 00:28:26.554 00:28:26.554 Controller Memory Buffer Support 00:28:26.554 ================================ 00:28:26.554 Supported: No 00:28:26.554 00:28:26.554 Persistent Memory Region Support 00:28:26.554 ================================ 00:28:26.554 Supported: No 00:28:26.554 00:28:26.554 Admin Command Set Attributes 00:28:26.554 ============================ 00:28:26.554 Security Send/Receive: Not Supported 00:28:26.554 Format NVM: Not Supported 00:28:26.554 Firmware Activate/Download: Not Supported 00:28:26.554 Namespace Management: Not Supported 00:28:26.554 Device Self-Test: Not Supported 00:28:26.554 Directives: Not Supported 00:28:26.554 NVMe-MI: Not Supported 00:28:26.554 Virtualization Management: Not Supported 00:28:26.554 Doorbell Buffer Config: Not Supported 00:28:26.554 Get LBA Status Capability: Not Supported 00:28:26.554 Command & Feature Lockdown Capability: Not Supported 00:28:26.554 Abort Command Limit: 4 00:28:26.554 Async Event Request Limit: 4 00:28:26.554 Number of Firmware Slots: N/A 00:28:26.554 Firmware Slot 1 Read-Only: N/A 00:28:26.554 Firmware Activation Without Reset: N/A 00:28:26.554 Multiple Update Detection Support: N/A 00:28:26.554 Firmware Update Granularity: No Information Provided 00:28:26.554 Per-Namespace SMART Log: No 00:28:26.554 Asymmetric Namespace Access Log Page: Not Supported 00:28:26.554 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:26.554 Command Effects Log Page: Supported 00:28:26.554 Get Log Page Extended Data: Supported 00:28:26.554 Telemetry Log Pages: Not Supported 00:28:26.554 Persistent Event Log Pages: Not Supported 00:28:26.554 Supported Log Pages Log Page: May Support 00:28:26.554 Commands Supported & Effects Log Page: Not Supported 00:28:26.554 Feature Identifiers & Effects Log Page:May 
Support 00:28:26.554 NVMe-MI Commands & Effects Log Page: May Support 00:28:26.554 Data Area 4 for Telemetry Log: Not Supported 00:28:26.555 Error Log Page Entries Supported: 128 00:28:26.555 Keep Alive: Supported 00:28:26.555 Keep Alive Granularity: 10000 ms 00:28:26.555 00:28:26.555 NVM Command Set Attributes 00:28:26.555 ========================== 00:28:26.555 Submission Queue Entry Size 00:28:26.555 Max: 64 00:28:26.555 Min: 64 00:28:26.555 Completion Queue Entry Size 00:28:26.555 Max: 16 00:28:26.555 Min: 16 00:28:26.555 Number of Namespaces: 32 00:28:26.555 Compare Command: Supported 00:28:26.555 Write Uncorrectable Command: Not Supported 00:28:26.555 Dataset Management Command: Supported 00:28:26.555 Write Zeroes Command: Supported 00:28:26.555 Set Features Save Field: Not Supported 00:28:26.555 Reservations: Supported 00:28:26.555 Timestamp: Not Supported 00:28:26.555 Copy: Supported 00:28:26.555 Volatile Write Cache: Present 00:28:26.555 Atomic Write Unit (Normal): 1 00:28:26.555 Atomic Write Unit (PFail): 1 00:28:26.555 Atomic Compare & Write Unit: 1 00:28:26.555 Fused Compare & Write: Supported 00:28:26.555 Scatter-Gather List 00:28:26.555 SGL Command Set: Supported 00:28:26.555 SGL Keyed: Supported 00:28:26.555 SGL Bit Bucket Descriptor: Not Supported 00:28:26.555 SGL Metadata Pointer: Not Supported 00:28:26.555 Oversized SGL: Not Supported 00:28:26.555 SGL Metadata Address: Not Supported 00:28:26.555 SGL Offset: Supported 00:28:26.555 Transport SGL Data Block: Not Supported 00:28:26.555 Replay Protected Memory Block: Not Supported 00:28:26.555 00:28:26.555 Firmware Slot Information 00:28:26.555 ========================= 00:28:26.555 Active slot: 1 00:28:26.555 Slot 1 Firmware Revision: 24.05.1 00:28:26.555 00:28:26.555 00:28:26.555 Commands Supported and Effects 00:28:26.555 ============================== 00:28:26.555 Admin Commands 00:28:26.555 -------------- 00:28:26.555 Get Log Page (02h): Supported 00:28:26.555 Identify (06h): Supported 
00:28:26.555 Abort (08h): Supported 00:28:26.555 Set Features (09h): Supported 00:28:26.555 Get Features (0Ah): Supported 00:28:26.555 Asynchronous Event Request (0Ch): Supported 00:28:26.555 Keep Alive (18h): Supported 00:28:26.555 I/O Commands 00:28:26.555 ------------ 00:28:26.555 Flush (00h): Supported LBA-Change 00:28:26.555 Write (01h): Supported LBA-Change 00:28:26.555 Read (02h): Supported 00:28:26.555 Compare (05h): Supported 00:28:26.555 Write Zeroes (08h): Supported LBA-Change 00:28:26.555 Dataset Management (09h): Supported LBA-Change 00:28:26.555 Copy (19h): Supported LBA-Change 00:28:26.555 Unknown (79h): Supported LBA-Change 00:28:26.555 Unknown (7Ah): Supported 00:28:26.555 00:28:26.555 Error Log 00:28:26.555 ========= 00:28:26.555 00:28:26.555 Arbitration 00:28:26.555 =========== 00:28:26.555 Arbitration Burst: 1 00:28:26.555 00:28:26.555 Power Management 00:28:26.555 ================ 00:28:26.555 Number of Power States: 1 00:28:26.555 Current Power State: Power State #0 00:28:26.555 Power State #0: 00:28:26.555 Max Power: 0.00 W 00:28:26.555 Non-Operational State: Operational 00:28:26.555 Entry Latency: Not Reported 00:28:26.555 Exit Latency: Not Reported 00:28:26.555 Relative Read Throughput: 0 00:28:26.555 Relative Read Latency: 0 00:28:26.555 Relative Write Throughput: 0 00:28:26.555 Relative Write Latency: 0 00:28:26.555 Idle Power: Not Reported 00:28:26.555 Active Power: Not Reported 00:28:26.555 Non-Operational Permissive Mode: Not Supported 00:28:26.555 00:28:26.555 Health Information 00:28:26.555 ================== 00:28:26.555 Critical Warnings: 00:28:26.555 Available Spare Space: OK 00:28:26.555 Temperature: OK 00:28:26.555 Device Reliability: OK 00:28:26.555 Read Only: No 00:28:26.555 Volatile Memory Backup: OK 00:28:26.555 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:26.555 Temperature Threshold: [2024-07-14 14:04:04.336707] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.336719] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19b6980) 00:28:26.555 [2024-07-14 14:04:04.336730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.555 [2024-07-14 14:04:04.336751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ee60, cid 7, qid 0 00:28:26.555 [2024-07-14 14:04:04.336889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.555 [2024-07-14 14:04:04.336904] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.555 [2024-07-14 14:04:04.336911] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.336918] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1ee60) on tqpair=0x19b6980 00:28:26.555 [2024-07-14 14:04:04.336959] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:26.555 [2024-07-14 14:04:04.336981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.555 [2024-07-14 14:04:04.336993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.555 [2024-07-14 14:04:04.337002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.555 [2024-07-14 14:04:04.337012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.555 [2024-07-14 14:04:04.337024] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.337032] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.337039] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.555 [2024-07-14 14:04:04.337049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.555 [2024-07-14 14:04:04.337075] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.555 [2024-07-14 14:04:04.337163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.555 [2024-07-14 14:04:04.337175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.555 [2024-07-14 14:04:04.337182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.337189] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.555 [2024-07-14 14:04:04.337201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.337209] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.337226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.555 [2024-07-14 14:04:04.337236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.555 [2024-07-14 14:04:04.337261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.555 [2024-07-14 14:04:04.337362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.555 [2024-07-14 14:04:04.337376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.555 [2024-07-14 14:04:04.337383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.555 [2024-07-14 14:04:04.337390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 
[2024-07-14 14:04:04.337399] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:26.556 [2024-07-14 14:04:04.337407] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:26.556 [2024-07-14 14:04:04.337423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337439] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.337449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.337470] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.337553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.337566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.337573] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337579] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.337596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337606] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337612] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.337623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.337651] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.337732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.337746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.337754] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337760] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.337778] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337794] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.337808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.337829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.337919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.337933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.337940] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.337964] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.337980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.337991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.338011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.338089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.338103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.338110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338117] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.338135] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.338161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.338181] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.338266] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.338278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.338285] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.338309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338319] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338325] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.338336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.338355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.338435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.338449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.338456] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338463] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.338481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.338510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.338532] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.338612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.338626] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.338633] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338640] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.338657] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.338684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.338704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.338777] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.338789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.338796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.338820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338829] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.338836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.338846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.338866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.556 [2024-07-14 14:04:04.342890] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.556 [2024-07-14 14:04:04.342907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.556 [2024-07-14 14:04:04.342914] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.342921] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.556 [2024-07-14 14:04:04.342940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.342949] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:26.556 [2024-07-14 14:04:04.342956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6980) 00:28:26.556 [2024-07-14 14:04:04.342967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.556 [2024-07-14 14:04:04.342989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1e8e0, cid 3, qid 0 00:28:26.557 [2024-07-14 14:04:04.343083] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:26.557 [2024-07-14 14:04:04.343098] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:26.557 [2024-07-14 14:04:04.343105] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:26.557 [2024-07-14 14:04:04.343111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a1e8e0) on tqpair=0x19b6980 00:28:26.557 [2024-07-14 14:04:04.343125] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:28:26.557 0 Kelvin (-273 Celsius) 00:28:26.557 Available Spare: 0% 00:28:26.557 Available Spare Threshold: 0% 00:28:26.557 Life Percentage Used: 0% 00:28:26.557 Data Units Read: 0 00:28:26.557 Data Units Written: 0 00:28:26.557 Host Read Commands: 0 00:28:26.557 Host Write Commands: 0 
00:28:26.557 Controller Busy Time: 0 minutes 00:28:26.557 Power Cycles: 0 00:28:26.557 Power On Hours: 0 hours 00:28:26.557 Unsafe Shutdowns: 0 00:28:26.557 Unrecoverable Media Errors: 0 00:28:26.557 Lifetime Error Log Entries: 0 00:28:26.557 Warning Temperature Time: 0 minutes 00:28:26.557 Critical Temperature Time: 0 minutes 00:28:26.557 00:28:26.557 Number of Queues 00:28:26.557 ================ 00:28:26.557 Number of I/O Submission Queues: 127 00:28:26.557 Number of I/O Completion Queues: 127 00:28:26.557 00:28:26.557 Active Namespaces 00:28:26.557 ================= 00:28:26.557 Namespace ID:1 00:28:26.557 Error Recovery Timeout: Unlimited 00:28:26.557 Command Set Identifier: NVM (00h) 00:28:26.557 Deallocate: Supported 00:28:26.557 Deallocated/Unwritten Error: Not Supported 00:28:26.557 Deallocated Read Value: Unknown 00:28:26.557 Deallocate in Write Zeroes: Not Supported 00:28:26.557 Deallocated Guard Field: 0xFFFF 00:28:26.557 Flush: Supported 00:28:26.557 Reservation: Supported 00:28:26.557 Namespace Sharing Capabilities: Multiple Controllers 00:28:26.557 Size (in LBAs): 131072 (0GiB) 00:28:26.557 Capacity (in LBAs): 131072 (0GiB) 00:28:26.557 Utilization (in LBAs): 131072 (0GiB) 00:28:26.557 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:26.557 EUI64: ABCDEF0123456789 00:28:26.557 UUID: 0a48d8cd-0b2b-4bfb-8007-d1798d2e3a1f 00:28:26.557 Thin Provisioning: Not Supported 00:28:26.557 Per-NS Atomic Units: Yes 00:28:26.557 Atomic Boundary Size (Normal): 0 00:28:26.557 Atomic Boundary Size (PFail): 0 00:28:26.557 Atomic Boundary Offset: 0 00:28:26.557 Maximum Single Source Range Length: 65535 00:28:26.557 Maximum Copy Length: 65535 00:28:26.557 Maximum Source Range Count: 1 00:28:26.557 NGUID/EUI64 Never Reused: No 00:28:26.557 Namespace Write Protected: No 00:28:26.557 Number of LBA Formats: 1 00:28:26.557 Current LBA Format: LBA Format #00 00:28:26.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:26.557 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- 
host/identify.sh@51 -- # sync 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:26.557 rmmod nvme_tcp 00:28:26.557 rmmod nvme_fabrics 00:28:26.557 rmmod nvme_keyring 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1550299 ']' 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1550299 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1550299 ']' 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1550299 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:26.557 14:04:04 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1550299 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1550299' 00:28:26.557 killing process with pid 1550299 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1550299 00:28:26.557 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1550299 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.814 14:04:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.344 14:04:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:29.344 00:28:29.344 real 0m5.253s 00:28:29.344 user 0m4.077s 00:28:29.344 sys 0m1.808s 00:28:29.344 14:04:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.344 14:04:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:29.344 
************************************ 00:28:29.344 END TEST nvmf_identify 00:28:29.344 ************************************ 00:28:29.344 14:04:06 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:29.344 14:04:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:29.344 14:04:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.344 14:04:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.344 ************************************ 00:28:29.344 START TEST nvmf_perf 00:28:29.344 ************************************ 00:28:29.344 14:04:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:29.344 * Looking for test storage... 00:28:29.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.344 14:04:06 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.344 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.345 
14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:28:29.345 14:04:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.240 14:04:08 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.240 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:31.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.241 14:04:08 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:31.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:28:31.241 00:28:31.241 --- 10.0.0.2 ping statistics --- 00:28:31.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.241 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:31.241 14:04:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:28:31.241 00:28:31.241 --- 10.0.0.1 ping statistics --- 00:28:31.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.241 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1552277 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1552277 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1552277 ']' 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:31.241 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:31.241 [2024-07-14 14:04:09.080822] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:31.241 [2024-07-14 14:04:09.080938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.241 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.241 [2024-07-14 14:04:09.147523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.498 [2024-07-14 14:04:09.235619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.498 [2024-07-14 14:04:09.235679] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.498 [2024-07-14 14:04:09.235692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.498 [2024-07-14 14:04:09.235703] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.498 [2024-07-14 14:04:09.235727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:31.498 [2024-07-14 14:04:09.235820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.498 [2024-07-14 14:04:09.235893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.498 [2024-07-14 14:04:09.235932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.498 [2024-07-14 14:04:09.235934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:31.498 14:04:09 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:34.773 14:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:34.773 14:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:35.031 14:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:35.031 14:04:12 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:35.288 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:35.288 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:88:00.0 ']' 00:28:35.288 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:35.288 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:35.288 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:35.288 [2024-07-14 14:04:13.258309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.545 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.801 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:35.801 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:36.058 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:36.058 14:04:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:36.314 14:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.571 [2024-07-14 14:04:14.300754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.571 14:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:36.829 14:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:36.829 14:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 
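The target-side configuration exercised above reduces to five JSON-RPC calls. A condensed replay, with `RPC` standing in for the full `scripts/rpc.py` path shown in the log (this sketch only prints the steps; running them assumes a live target):

```shell
# Condensed replay of the subsystem setup from the log above.
# RPC is a stand-in for the full scripts/rpc.py path; commands are printed,
# not executed, since they require a running nvmf target.
RPC="scripts/rpc.py"
setup_cmds=(
  "nvmf_create_transport -t tcp -o"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for c in "${setup_cmds[@]}"; do
  echo "$RPC $c"
done
```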
00:28:36.829 14:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:36.829 14:04:14 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:38.203 Initializing NVMe Controllers 00:28:38.203 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:38.203 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:38.203 Initialization complete. Launching workers. 00:28:38.203 ======================================================== 00:28:38.203 Latency(us) 00:28:38.203 Device Information : IOPS MiB/s Average min max 00:28:38.203 PCIE (0000:88:00.0) NSID 1 from core 0: 85821.46 335.24 372.27 32.87 7256.13 00:28:38.203 ======================================================== 00:28:38.203 Total : 85821.46 335.24 372.27 32.87 7256.13 00:28:38.203 00:28:38.203 14:04:15 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:38.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.191 Initializing NVMe Controllers 00:28:39.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:39.191 Initialization complete. Launching workers. 
00:28:39.191 ======================================================== 00:28:39.191 Latency(us) 00:28:39.191 Device Information : IOPS MiB/s Average min max 00:28:39.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 112.61 0.44 8880.34 139.53 45781.99 00:28:39.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.75 0.28 14695.32 6970.57 47899.74 00:28:39.191 ======================================================== 00:28:39.191 Total : 183.36 0.72 11124.16 139.53 47899.74 00:28:39.191 00:28:39.191 14:04:17 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:39.191 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.563 Initializing NVMe Controllers 00:28:40.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:40.563 Initialization complete. Launching workers. 
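The MiB/s column in these latency tables is derived from IOPS and the requested IO size (`-o 4096`): MiB/s = IOPS × io_size / 2^20. Cross-checking the NSID 1 row of the qd=1 table above as a sketch:

```shell
# Cross-check of the qd=1 TCP table above: MiB/s = IOPS * io_size / 2^20.
iops=112.61      # NSID 1 IOPS from the table
io_size=4096     # bytes, from the -o 4096 flag
mib_s=$(awk -v i="$iops" -v o="$io_size" 'BEGIN { printf "%.2f", i * o / 1048576 }')
echo "$mib_s"    # 0.44, matching the NSID 1 MiB/s column
```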
00:28:40.563 ======================================================== 00:28:40.563 Latency(us) 00:28:40.563 Device Information : IOPS MiB/s Average min max 00:28:40.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8157.38 31.86 3925.32 608.90 9435.89 00:28:40.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3816.27 14.91 8413.28 5451.81 16610.07 00:28:40.563 ======================================================== 00:28:40.563 Total : 11973.64 46.77 5355.74 608.90 16610.07 00:28:40.563 00:28:40.563 14:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:40.563 14:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:40.563 14:04:18 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.563 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.093 Initializing NVMe Controllers 00:28:43.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.093 Controller IO queue size 128, less than required. 00:28:43.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.093 Controller IO queue size 128, less than required. 00:28:43.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:43.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:43.093 Initialization complete. Launching workers. 
00:28:43.093 ======================================================== 00:28:43.093 Latency(us) 00:28:43.093 Device Information : IOPS MiB/s Average min max 00:28:43.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1703.11 425.78 76135.53 53999.55 144639.11 00:28:43.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.12 151.28 225409.90 108419.07 337158.90 00:28:43.093 ======================================================== 00:28:43.093 Total : 2308.23 577.06 115268.84 53999.55 337158.90 00:28:43.093 00:28:43.351 14:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:43.351 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.351 No valid NVMe controllers or AIO or URING devices found 00:28:43.351 Initializing NVMe Controllers 00:28:43.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.351 Controller IO queue size 128, less than required. 00:28:43.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.351 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:43.351 Controller IO queue size 128, less than required. 00:28:43.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:43.351 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:43.351 WARNING: Some requested NVMe devices were skipped 00:28:43.351 14:04:21 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:43.351 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.875 Initializing NVMe Controllers 00:28:45.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.875 Controller IO queue size 128, less than required. 00:28:45.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.875 Controller IO queue size 128, less than required. 00:28:45.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:45.875 Initialization complete. Launching workers. 
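The two "Removing this ns from test" warnings above are simple alignment failures: `spdk_nvme_perf` skips any namespace whose sector size does not evenly divide the requested IO size. For the values in this run:

```shell
# Why both namespaces were skipped in the -o 36964 run above:
io_size=36964
sector=512
echo $(( io_size % sector ))   # 100 -> not sector-aligned, so the ns is dropped
```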
00:28:45.875 00:28:45.875 ==================== 00:28:45.875 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:45.875 TCP transport: 00:28:45.875 polls: 9996 00:28:45.875 idle_polls: 6134 00:28:45.875 sock_completions: 3862 00:28:45.875 nvme_completions: 6277 00:28:45.875 submitted_requests: 9438 00:28:45.875 queued_requests: 1 00:28:45.875 00:28:45.875 ==================== 00:28:45.875 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:45.875 TCP transport: 00:28:45.875 polls: 10279 00:28:45.875 idle_polls: 7280 00:28:45.875 sock_completions: 2999 00:28:45.875 nvme_completions: 5475 00:28:45.875 submitted_requests: 8280 00:28:45.875 queued_requests: 1 00:28:45.875 ======================================================== 00:28:45.875 Latency(us) 00:28:45.875 Device Information : IOPS MiB/s Average min max 00:28:45.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1566.82 391.70 83838.02 47742.77 144641.43 00:28:45.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1366.59 341.65 94196.54 46293.66 152009.17 00:28:45.875 ======================================================== 00:28:45.875 Total : 2933.41 733.35 88663.77 46293.66 152009.17 00:28:45.875 00:28:45.875 14:04:23 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:45.875 14:04:23 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.133 14:04:24 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:46.133 14:04:24 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:46.133 14:04:24 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=c3a0b719-84b8-4e66-a27b-44aad265544a 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c3a0b719-84b8-4e66-a27b-44aad265544a 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=c3a0b719-84b8-4e66-a27b-44aad265544a 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:49.407 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:49.664 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:49.664 { 00:28:49.665 "uuid": "c3a0b719-84b8-4e66-a27b-44aad265544a", 00:28:49.665 "name": "lvs_0", 00:28:49.665 "base_bdev": "Nvme0n1", 00:28:49.665 "total_data_clusters": 238234, 00:28:49.665 "free_clusters": 238234, 00:28:49.665 "block_size": 512, 00:28:49.665 "cluster_size": 4194304 00:28:49.665 } 00:28:49.665 ]' 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="c3a0b719-84b8-4e66-a27b-44aad265544a") .free_clusters' 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c3a0b719-84b8-4e66-a27b-44aad265544a") .cluster_size' 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:49.665 952936 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:28:49.665 14:04:27 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3a0b719-84b8-4e66-a27b-44aad265544a lbd_0 20480 00:28:50.229 14:04:28 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=eeeb69a3-b21d-4d50-b813-c9eb3f273a26 00:28:50.229 14:04:28 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore eeeb69a3-b21d-4d50-b813-c9eb3f273a26 lvs_n_0 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=fbd660ca-62fc-451e-9208-97b3f4ce56f7 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb fbd660ca-62fc-451e-9208-97b3f4ce56f7 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=fbd660ca-62fc-451e-9208-97b3f4ce56f7 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:51.161 14:04:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:51.440 { 00:28:51.440 "uuid": "c3a0b719-84b8-4e66-a27b-44aad265544a", 00:28:51.440 "name": "lvs_0", 00:28:51.440 "base_bdev": "Nvme0n1", 00:28:51.440 "total_data_clusters": 238234, 00:28:51.440 "free_clusters": 233114, 00:28:51.440 "block_size": 512, 00:28:51.440 "cluster_size": 4194304 00:28:51.440 }, 00:28:51.440 { 00:28:51.440 "uuid": "fbd660ca-62fc-451e-9208-97b3f4ce56f7", 00:28:51.440 "name": "lvs_n_0", 00:28:51.440 "base_bdev": "eeeb69a3-b21d-4d50-b813-c9eb3f273a26", 00:28:51.440 "total_data_clusters": 5114, 00:28:51.440 "free_clusters": 
5114, 00:28:51.440 "block_size": 512, 00:28:51.440 "cluster_size": 4194304 00:28:51.440 } 00:28:51.440 ]' 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="fbd660ca-62fc-451e-9208-97b3f4ce56f7") .free_clusters' 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="fbd660ca-62fc-451e-9208-97b3f4ce56f7") .cluster_size' 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:51.440 20456 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:51.440 14:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fbd660ca-62fc-451e-9208-97b3f4ce56f7 lbd_nest_0 20456 00:28:51.697 14:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=337beed6-a44b-4621-a823-6791d217db67 00:28:51.697 14:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.956 14:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:51.956 14:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 337beed6-a44b-4621-a823-6791d217db67 00:28:52.215 14:04:30 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.472 14:04:30 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:52.472 14:04:30 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:52.472 14:04:30 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:52.472 14:04:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:52.472 14:04:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.472 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.657 Initializing NVMe Controllers 00:29:04.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.657 Initialization complete. Launching workers. 00:29:04.657 ======================================================== 00:29:04.657 Latency(us) 00:29:04.657 Device Information : IOPS MiB/s Average min max 00:29:04.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.50 0.02 22038.12 168.50 46729.00 00:29:04.657 ======================================================== 00:29:04.657 Total : 45.50 0.02 22038.12 168.50 46729.00 00:29:04.657 00:29:04.657 14:04:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:04.657 14:04:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.657 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.643 Initializing NVMe Controllers 00:29:14.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:14.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:14.643 Initialization complete. 
Launching workers. 00:29:14.643 ======================================================== 00:29:14.643 Latency(us) 00:29:14.643 Device Information : IOPS MiB/s Average min max 00:29:14.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.80 10.10 12391.10 5059.67 47899.96 00:29:14.643 ======================================================== 00:29:14.643 Total : 80.80 10.10 12391.10 5059.67 47899.96 00:29:14.643 00:29:14.643 14:04:51 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:14.643 14:04:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:14.643 14:04:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:14.643 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.601 Initializing NVMe Controllers 00:29:24.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:24.601 Initialization complete. Launching workers. 
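For reference, the 952936 and 20456 figures printed by `get_lvs_free_mb` earlier in this log come from `free_clusters` × `cluster_size`, converted to MiB. A standalone sketch using the lvs_0 values from the `bdev_lvol_get_lvstores` output above:

```shell
# Reproducing the free-space math from the lvstore section of the log (lvs_0).
fc=238234          # free_clusters reported by bdev_lvol_get_lvstores
cs=4194304         # cluster_size in bytes (4 MiB)
free_mb=$(( fc * (cs / 1048576) ))
echo "$free_mb"    # 952936, the value echoed by get_lvs_free_mb
```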
00:29:24.601 ======================================================== 00:29:24.601 Latency(us) 00:29:24.601 Device Information : IOPS MiB/s Average min max 00:29:24.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7530.00 3.68 4249.45 281.12 11167.56 00:29:24.601 ======================================================== 00:29:24.601 Total : 7530.00 3.68 4249.45 281.12 11167.56 00:29:24.601 00:29:24.601 14:05:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:24.601 14:05:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.601 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.562 Initializing NVMe Controllers 00:29:34.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.562 Initialization complete. Launching workers. 
00:29:34.562 ======================================================== 00:29:34.562 Latency(us) 00:29:34.562 Device Information : IOPS MiB/s Average min max 00:29:34.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3993.56 499.19 8015.18 730.58 20243.52 00:29:34.562 ======================================================== 00:29:34.562 Total : 3993.56 499.19 8015.18 730.58 20243.52 00:29:34.562 00:29:34.562 14:05:11 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:34.562 14:05:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:34.562 14:05:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.562 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.520 Initializing NVMe Controllers 00:29:44.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.520 Controller IO queue size 128, less than required. 00:29:44.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:44.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.520 Initialization complete. Launching workers. 
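The string of perf invocations in this stretch of the log is generated by the nested `qd_depth`/`io_size` loops declared in host/perf.sh above. The sweep reduces to six runs; a sketch that prints them (the perf binary path is shortened here relative to the full build/bin path in the log):

```shell
# The qd/io-size sweep driving the runs above: 3 queue depths x 2 IO sizes.
qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    echo "spdk_nvme_perf -q $qd -o $o -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'"
  done
done
```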
00:29:44.520 ======================================================== 00:29:44.520 Latency(us) 00:29:44.520 Device Information : IOPS MiB/s Average min max 00:29:44.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11936.20 5.83 10727.41 1861.12 23933.19 00:29:44.520 ======================================================== 00:29:44.520 Total : 11936.20 5.83 10727.41 1861.12 23933.19 00:29:44.520 00:29:44.520 14:05:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:44.520 14:05:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:44.520 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.712 Initializing NVMe Controllers 00:29:56.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.712 Controller IO queue size 128, less than required. 00:29:56.712 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:56.712 Initialization complete. Launching workers. 
00:29:56.712 ======================================================== 00:29:56.712 Latency(us) 00:29:56.712 Device Information : IOPS MiB/s Average min max 00:29:56.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1178.91 147.36 108845.88 16096.37 198469.91 00:29:56.712 ======================================================== 00:29:56.712 Total : 1178.91 147.36 108845.88 16096.37 198469.91 00:29:56.712 00:29:56.712 14:05:32 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.712 14:05:32 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 337beed6-a44b-4621-a823-6791d217db67 00:29:56.712 14:05:33 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:56.712 14:05:33 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eeeb69a3-b21d-4d50-b813-c9eb3f273a26 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.712 rmmod 
nvme_tcp 00:29:56.712 rmmod nvme_fabrics 00:29:56.712 rmmod nvme_keyring 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1552277 ']' 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1552277 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1552277 ']' 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1552277 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:56.712 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1552277 00:29:56.713 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:56.713 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:56.713 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1552277' 00:29:56.713 killing process with pid 1552277 00:29:56.713 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1552277 00:29:56.713 14:05:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1552277 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.609 14:05:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.568 14:05:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.568 00:30:00.568 real 1m31.349s 00:30:00.568 user 5m34.750s 00:30:00.568 sys 0m17.031s 00:30:00.568 14:05:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:00.568 14:05:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:00.568 ************************************ 00:30:00.568 END TEST nvmf_perf 00:30:00.568 ************************************ 00:30:00.568 14:05:38 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:00.568 14:05:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:00.568 14:05:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:00.568 14:05:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.568 ************************************ 00:30:00.568 START TEST nvmf_fio_host 00:30:00.568 ************************************ 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:00.568 * Looking for test storage... 
00:30:00.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.568 14:05:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:00.569 
14:05:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.569 14:05:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:02.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:02.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:02.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:02.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.469 
14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:30:02.469 00:30:02.469 --- 10.0.0.2 ping statistics --- 00:30:02.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.469 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:30:02.469 00:30:02.469 --- 10.0.0.1 ping statistics --- 00:30:02.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.469 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.469 14:05:40 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1564341 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.469 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1564341 00:30:02.470 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1564341 ']' 00:30:02.470 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.470 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:02.470 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.470 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:02.470 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.470 [2024-07-14 14:05:40.313539] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:30:02.470 [2024-07-14 14:05:40.313635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.470 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.470 [2024-07-14 14:05:40.388508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.728 [2024-07-14 14:05:40.480096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.728 [2024-07-14 14:05:40.480155] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.728 [2024-07-14 14:05:40.480179] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.728 [2024-07-14 14:05:40.480192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.728 [2024-07-14 14:05:40.480205] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:02.728 [2024-07-14 14:05:40.480271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.728 [2024-07-14 14:05:40.480329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.728 [2024-07-14 14:05:40.480373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.728 [2024-07-14 14:05:40.480373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.728 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:02.728 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:02.728 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.986 [2024-07-14 14:05:40.836146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.986 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:02.986 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.986 14:05:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.986 14:05:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:03.244 Malloc1 00:30:03.244 14:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.502 14:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:03.761 14:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.019 
[2024-07-14 14:05:41.883701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.019 14:05:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:04.277 14:05:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:04.535 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:04.535 fio-3.35 00:30:04.535 Starting 1 thread 00:30:04.535 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.061 00:30:07.061 test: (groupid=0, jobs=1): err= 0: pid=1564713: Sun Jul 14 14:05:44 2024 00:30:07.061 read: IOPS=9056, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2007msec) 00:30:07.061 slat (nsec): 
min=1824, max=164274, avg=2438.48, stdev=1820.95 00:30:07.061 clat (usec): min=2484, max=12877, avg=7721.17, stdev=644.52 00:30:07.061 lat (usec): min=2511, max=12879, avg=7723.61, stdev=644.41 00:30:07.061 clat percentiles (usec): 00:30:07.061 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:30:07.061 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:30:07.061 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:30:07.061 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[12256], 00:30:07.061 | 99.99th=[12911] 00:30:07.061 bw ( KiB/s): min=35416, max=36648, per=99.93%, avg=36200.00, stdev=539.71, samples=4 00:30:07.061 iops : min= 8854, max= 9162, avg=9050.00, stdev=134.93, samples=4 00:30:07.061 write: IOPS=9067, BW=35.4MiB/s (37.1MB/s)(71.1MiB/2007msec); 0 zone resets 00:30:07.061 slat (nsec): min=1966, max=129546, avg=2533.24, stdev=1399.96 00:30:07.061 clat (usec): min=1381, max=12669, avg=6351.09, stdev=520.65 00:30:07.061 lat (usec): min=1390, max=12679, avg=6353.62, stdev=520.60 00:30:07.061 clat percentiles (usec): 00:30:07.061 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:30:07.061 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:30:07.061 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:30:07.061 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[10290], 99.95th=[11207], 00:30:07.061 | 99.99th=[12649] 00:30:07.061 bw ( KiB/s): min=35968, max=36616, per=100.00%, avg=36294.00, stdev=304.24, samples=4 00:30:07.061 iops : min= 8992, max= 9154, avg=9073.50, stdev=76.06, samples=4 00:30:07.061 lat (msec) : 2=0.03%, 4=0.11%, 10=99.71%, 20=0.15% 00:30:07.061 cpu : usr=64.21%, sys=33.70%, ctx=100, majf=0, minf=6 00:30:07.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:07.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.061 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:07.061 issued rwts: total=18176,18198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:07.061 00:30:07.061 Run status group 0 (all jobs): 00:30:07.061 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2007-2007msec 00:30:07.061 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.1MiB (74.5MB), run=2007-2007msec 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:07.061 14:05:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:07.061 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:07.061 fio-3.35 00:30:07.061 Starting 1 thread 00:30:07.061 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.588 00:30:09.588 test: (groupid=0, jobs=1): err= 0: pid=1565049: Sun Jul 14 14:05:47 2024 00:30:09.588 read: IOPS=8504, BW=133MiB/s (139MB/s)(267MiB/2006msec) 00:30:09.588 slat (nsec): 
min=2807, max=99782, avg=3670.90, stdev=1699.65 00:30:09.588 clat (usec): min=2154, max=15625, avg=8566.92, stdev=2065.68 00:30:09.588 lat (usec): min=2158, max=15629, avg=8570.59, stdev=2065.69 00:30:09.588 clat percentiles (usec): 00:30:09.588 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6783], 00:30:09.588 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8979], 00:30:09.588 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11207], 95.00th=[12256], 00:30:09.588 | 99.00th=[13960], 99.50th=[14353], 99.90th=[14746], 99.95th=[14877], 00:30:09.588 | 99.99th=[15533] 00:30:09.588 bw ( KiB/s): min=64672, max=76544, per=52.08%, avg=70872.00, stdev=5911.34, samples=4 00:30:09.588 iops : min= 4042, max= 4784, avg=4429.50, stdev=369.46, samples=4 00:30:09.588 write: IOPS=4968, BW=77.6MiB/s (81.4MB/s)(144MiB/1858msec); 0 zone resets 00:30:09.588 slat (usec): min=30, max=156, avg=33.58, stdev= 4.73 00:30:09.588 clat (usec): min=6234, max=19702, avg=11174.14, stdev=1932.37 00:30:09.588 lat (usec): min=6266, max=19735, avg=11207.73, stdev=1932.35 00:30:09.588 clat percentiles (usec): 00:30:09.588 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:09.588 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11600], 00:30:09.588 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13829], 95.00th=[14484], 00:30:09.588 | 99.00th=[16450], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:30:09.588 | 99.99th=[19792] 00:30:09.588 bw ( KiB/s): min=66944, max=79872, per=92.90%, avg=73856.00, stdev=6108.12, samples=4 00:30:09.588 iops : min= 4184, max= 4992, avg=4616.00, stdev=381.76, samples=4 00:30:09.588 lat (msec) : 4=0.27%, 10=60.19%, 20=39.54% 00:30:09.588 cpu : usr=76.87%, sys=21.39%, ctx=43, majf=0, minf=2 00:30:09.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:30:09.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:09.588 issued rwts: total=17060,9232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:09.588 00:30:09.588 Run status group 0 (all jobs): 00:30:09.588 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2006-2006msec 00:30:09.588 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=144MiB (151MB), run=1858-1858msec 00:30:09.588 14:05:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:09.846 14:05:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:13.130 Nvme0n1 00:30:13.130 14:05:50 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=c5dbad4a-2a71-4510-a893-dc22cb1edf18 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c5dbad4a-2a71-4510-a893-dc22cb1edf18 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=c5dbad4a-2a71-4510-a893-dc22cb1edf18 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:16.406 { 00:30:16.406 "uuid": "c5dbad4a-2a71-4510-a893-dc22cb1edf18", 00:30:16.406 "name": "lvs_0", 00:30:16.406 "base_bdev": "Nvme0n1", 00:30:16.406 "total_data_clusters": 930, 00:30:16.406 "free_clusters": 930, 00:30:16.406 "block_size": 512, 00:30:16.406 "cluster_size": 1073741824 00:30:16.406 } 00:30:16.406 ]' 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="c5dbad4a-2a71-4510-a893-dc22cb1edf18") .free_clusters' 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="c5dbad4a-2a71-4510-a893-dc22cb1edf18") .cluster_size' 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # free_mb=952320 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:16.406 952320 00:30:16.406 14:05:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:16.406 d8dff8e9-e303-43fa-a03e-c44e0a99e3e2 00:30:16.406 14:05:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:16.662 14:05:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:16.919 14:05:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:17.177 14:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.435 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:17.435 fio-3.35 00:30:17.435 Starting 1 thread 00:30:17.435 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.961 00:30:19.961 test: (groupid=0, jobs=1): err= 0: pid=1566333: Sun Jul 14 14:05:57 2024 00:30:19.961 read: IOPS=6065, BW=23.7MiB/s (24.8MB/s)(47.6MiB/2008msec) 00:30:19.961 slat (usec): min=2, max=147, avg= 2.72, stdev= 2.00 00:30:19.961 clat (usec): min=1019, max=171109, avg=11572.65, stdev=11589.83 00:30:19.961 lat (usec): min=1022, max=171146, avg=11575.36, stdev=11590.09 00:30:19.961 clat percentiles (msec): 00:30:19.961 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10], 00:30:19.961 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:19.961 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:19.961 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:19.961 | 99.99th=[ 171] 00:30:19.961 bw ( KiB/s): min=16928, max=26752, per=99.78%, avg=24210.00, stdev=4855.53, samples=4 00:30:19.961 iops : min= 4232, max= 6688, avg=6052.50, stdev=1213.88, samples=4 00:30:19.961 write: IOPS=6044, BW=23.6MiB/s (24.8MB/s)(47.4MiB/2008msec); 0 zone resets 00:30:19.961 slat (nsec): min=2147, max=98510, avg=2795.81, stdev=1392.46 00:30:19.961 clat (usec): min=340, max=169231, avg=9405.32, stdev=10879.13 00:30:19.961 lat (usec): min=343, max=169236, avg=9408.12, stdev=10879.34 00:30:19.961 clat percentiles (msec): 00:30:19.961 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:19.961 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:19.961 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:19.961 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:30:19.961 | 99.99th=[ 169] 00:30:19.961 bw ( KiB/s): min=17960, max=26432, per=99.97%, avg=24170.00, stdev=4143.10, samples=4 00:30:19.961 
iops : min= 4490, max= 6608, avg=6042.50, stdev=1035.74, samples=4 00:30:19.961 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:19.961 lat (msec) : 2=0.03%, 4=0.14%, 10=58.64%, 20=40.65%, 250=0.53% 00:30:19.961 cpu : usr=63.23%, sys=35.13%, ctx=102, majf=0, minf=24 00:30:19.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:19.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.961 issued rwts: total=12180,12137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.961 00:30:19.961 Run status group 0 (all jobs): 00:30:19.961 READ: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.6MiB (49.9MB), run=2008-2008msec 00:30:19.961 WRITE: bw=23.6MiB/s (24.8MB/s), 23.6MiB/s-23.6MiB/s (24.8MB/s-24.8MB/s), io=47.4MiB (49.7MB), run=2008-2008msec 00:30:19.961 14:05:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:19.961 14:05:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e6071b67-09bd-43f8-899f-9390e80b978e 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e6071b67-09bd-43f8-899f-9390e80b978e 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=e6071b67-09bd-43f8-899f-9390e80b978e 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1363 -- # local cs 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:21.336 { 00:30:21.336 "uuid": "c5dbad4a-2a71-4510-a893-dc22cb1edf18", 00:30:21.336 "name": "lvs_0", 00:30:21.336 "base_bdev": "Nvme0n1", 00:30:21.336 "total_data_clusters": 930, 00:30:21.336 "free_clusters": 0, 00:30:21.336 "block_size": 512, 00:30:21.336 "cluster_size": 1073741824 00:30:21.336 }, 00:30:21.336 { 00:30:21.336 "uuid": "e6071b67-09bd-43f8-899f-9390e80b978e", 00:30:21.336 "name": "lvs_n_0", 00:30:21.336 "base_bdev": "d8dff8e9-e303-43fa-a03e-c44e0a99e3e2", 00:30:21.336 "total_data_clusters": 237847, 00:30:21.336 "free_clusters": 237847, 00:30:21.336 "block_size": 512, 00:30:21.336 "cluster_size": 4194304 00:30:21.336 } 00:30:21.336 ]' 00:30:21.336 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="e6071b67-09bd-43f8-899f-9390e80b978e") .free_clusters' 00:30:21.595 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:21.595 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="e6071b67-09bd-43f8-899f-9390e80b978e") .cluster_size' 00:30:21.595 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:21.595 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:21.595 14:05:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:21.595 951388 00:30:21.595 14:05:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:22.168 c44271c1-597c-42b1-b371-f5a92b2b4f31 00:30:22.168 14:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:22.426 14:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:22.685 14:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:22.944 14:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.202 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:23.202 fio-3.35 00:30:23.202 Starting 1 thread 00:30:23.202 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.729 00:30:25.729 test: (groupid=0, jobs=1): err= 0: pid=1567142: Sun Jul 14 14:06:03 2024 00:30:25.729 read: IOPS=5872, BW=22.9MiB/s 
(24.1MB/s)(46.1MiB/2009msec) 00:30:25.729 slat (usec): min=2, max=155, avg= 2.72, stdev= 2.11 00:30:25.729 clat (usec): min=4517, max=18410, avg=11858.99, stdev=1078.94 00:30:25.729 lat (usec): min=4521, max=18412, avg=11861.71, stdev=1078.83 00:30:25.729 clat percentiles (usec): 00:30:25.729 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:30:25.729 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:30:25.729 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:30:25.729 | 99.00th=[14222], 99.50th=[14484], 99.90th=[17957], 99.95th=[18220], 00:30:25.729 | 99.99th=[18482] 00:30:25.729 bw ( KiB/s): min=22016, max=24032, per=99.91%, avg=23468.00, stdev=972.81, samples=4 00:30:25.729 iops : min= 5504, max= 6008, avg=5867.00, stdev=243.20, samples=4 00:30:25.729 write: IOPS=5863, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2009msec); 0 zone resets 00:30:25.729 slat (usec): min=2, max=108, avg= 2.81, stdev= 1.52 00:30:25.729 clat (usec): min=2185, max=18027, avg=9763.79, stdev=911.40 00:30:25.729 lat (usec): min=2191, max=18030, avg=9766.59, stdev=911.38 00:30:25.729 clat percentiles (usec): 00:30:25.729 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:25.729 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:30:25.729 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:25.729 | 99.00th=[11600], 99.50th=[11994], 99.90th=[15926], 99.95th=[17957], 00:30:25.729 | 99.99th=[17957] 00:30:25.729 bw ( KiB/s): min=23064, max=23624, per=99.91%, avg=23432.00, stdev=251.54, samples=4 00:30:25.729 iops : min= 5766, max= 5906, avg=5858.00, stdev=62.89, samples=4 00:30:25.729 lat (msec) : 4=0.05%, 10=32.70%, 20=67.26% 00:30:25.729 cpu : usr=59.81%, sys=38.50%, ctx=54, majf=0, minf=24 00:30:25.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:25.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.729 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:25.729 issued rwts: total=11798,11779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:25.729 00:30:25.729 Run status group 0 (all jobs): 00:30:25.729 READ: bw=22.9MiB/s (24.1MB/s), 22.9MiB/s-22.9MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.3MB), run=2009-2009msec 00:30:25.729 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.2MB), run=2009-2009msec 00:30:25.729 14:06:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:25.729 14:06:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:25.729 14:06:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:29.917 14:06:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:29.917 14:06:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:33.203 14:06:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:33.203 14:06:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:35.100 
14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:35.100 rmmod nvme_tcp 00:30:35.100 rmmod nvme_fabrics 00:30:35.100 rmmod nvme_keyring 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1564341 ']' 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1564341 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1564341 ']' 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1564341 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1564341 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1564341' 00:30:35.100 killing process with pid 1564341 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1564341 00:30:35.100 14:06:12 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@970 -- # wait 1564341 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:35.100 14:06:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.636 14:06:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:37.636 00:30:37.636 real 0m36.907s 00:30:37.636 user 2m21.745s 00:30:37.636 sys 0m6.824s 00:30:37.636 14:06:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:37.636 14:06:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.636 ************************************ 00:30:37.636 END TEST nvmf_fio_host 00:30:37.636 ************************************ 00:30:37.636 14:06:15 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:37.636 14:06:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:37.636 14:06:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:37.636 14:06:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:37.636 ************************************ 00:30:37.636 START TEST nvmf_failover 00:30:37.636 ************************************ 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:37.636 * Looking for test storage... 00:30:37.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:37.636 14:06:15 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:37.636 14:06:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:39.539 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:39.540 14:06:17 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:39.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:39.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:39.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:39.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:39.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:39.540 00:30:39.540 --- 10.0.0.2 ping statistics --- 00:30:39.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.540 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:39.540 00:30:39.540 --- 10.0.0.1 ping statistics --- 00:30:39.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.540 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.540 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1570921 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1570921 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1570921 ']' 
00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:39.541 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:39.541 [2024-07-14 14:06:17.262098] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:39.541 [2024-07-14 14:06:17.262182] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.541 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.541 [2024-07-14 14:06:17.330707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:39.541 [2024-07-14 14:06:17.429722] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.541 [2024-07-14 14:06:17.429791] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.541 [2024-07-14 14:06:17.429807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.541 [2024-07-14 14:06:17.429826] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.541 [2024-07-14 14:06:17.429838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:39.541 [2024-07-14 14:06:17.429918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.541 [2024-07-14 14:06:17.429951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.541 [2024-07-14 14:06:17.429955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.800 14:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:40.060 [2024-07-14 14:06:17.790548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.060 14:06:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:40.319 Malloc0 00:30:40.319 14:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:40.578 14:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:40.838 14:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.838 [2024-07-14 14:06:18.799448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.838 14:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:41.096 [2024-07-14 14:06:19.040155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:41.096 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:41.355 [2024-07-14 14:06:19.280931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1571209 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1571209 /var/tmp/bdevperf.sock 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1571209 ']' 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:41.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:41.355 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:41.923 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:41.923 14:06:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:41.923 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:42.180 NVMe0n1 00:30:42.180 14:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:42.437 00:30:42.437 14:06:20 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1571341 00:30:42.437 14:06:20 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:42.437 14:06:20 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:43.813 14:06:21 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.813 [2024-07-14 14:06:21.646772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.813 [2024-07-14 14:06:21.646851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is 
same with the state(5) to be set 00:30:43.813
00:30:43.814 [2024-07-14 14:06:21.647631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 [2024-07-14 14:06:21.647643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 [2024-07-14 14:06:21.647654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 [2024-07-14 14:06:21.647667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 [2024-07-14 14:06:21.647679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 [2024-07-14 14:06:21.647691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 [2024-07-14 14:06:21.647703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874d50 is same with the state(5) to be set 00:30:43.814 14:06:21 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:47.132 14:06:24 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.132 00:30:47.132 14:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:47.390 [2024-07-14 14:06:25.223390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x875bd0 is same with the state(5) to be set 00:30:47.390 14:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:50.680 14:06:28 
nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:50.680 [2024-07-14 14:06:28.533518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:50.680 14:06:28 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:51.612 14:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:51.870 [2024-07-14 14:06:29.838434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876750 is same with the state(5) to be set
[last message repeated 36 times, timestamps 14:06:29.838490 through 14:06:29.838946]
00:30:52.129 14:06:29 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1571341
00:30:58.711 0
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1571209
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1571209 ']'
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1571209
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1571209
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1571209'
killing process with pid 1571209
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1571209
00:30:58.711 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1571209
00:30:58.711 14:06:35
nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:58.711 [2024-07-14 14:06:19.343269] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:30:58.711 [2024-07-14 14:06:19.343344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571209 ]
00:30:58.711 EAL: No free 2048 kB hugepages reported on node 1
00:30:58.711 [2024-07-14 14:06:19.403832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:58.711 [2024-07-14 14:06:19.491689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:58.711 Running I/O for 15 seconds...
00:30:58.711 [2024-07-14 14:06:21.649256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:58.711 [2024-07-14 14:06:21.649304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous print_command/print_completion pairs repeated, timestamps 14:06:21.649332 through 14:06:21.651492: READs at lba 77672 through 78160 and WRITEs at lba 78168 through 78272 (len:8 each), every command completed ABORTED - SQ DELETION (00/08)]
00:30:58.713 [2024-07-14
14:06:21.651506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651658] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:58.713 [2024-07-14 14:06:21.651829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.713 [2024-07-14 14:06:21.651843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.651856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.651871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.651897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.651913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.651926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.651941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.651954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.651969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.651982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.651996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 
[2024-07-14 14:06:21.652319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652793] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.714 [2024-07-14 14:06:21.652922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.652950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:58.714 [2024-07-14 14:06:21.652965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:58.714 [2024-07-14 14:06:21.652977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78680 len:8 PRP1 0x0 PRP2 0x0 00:30:58.714 [2024-07-14 14:06:21.652990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.653052] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2104b50 was disconnected and freed. reset controller. 00:30:58.714 [2024-07-14 14:06:21.653068] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:58.714 [2024-07-14 14:06:21.653104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.714 [2024-07-14 14:06:21.653121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.653135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.714 [2024-07-14 14:06:21.653148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.714 [2024-07-14 14:06:21.653160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.714 [2024-07-14 14:06:21.653173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:21.653186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.715 [2024-07-14 14:06:21.653198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:21.653210] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:58.715 [2024-07-14 14:06:21.656479] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.715 [2024-07-14 14:06:21.656518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e5eb0 (9): Bad file descriptor 00:30:58.715 [2024-07-14 14:06:21.696601] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:58.715 [2024-07-14 14:06:25.225000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75512 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225347] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.715 [2024-07-14 14:06:25.225690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.715 [2024-07-14 14:06:25.225704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated *NOTICE* pairs elided: each remaining queued I/O on qid:1 (one READ, then WRITEs covering lba:75656 through lba:76488, len:8, in 8-block steps) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; from lba:76312 onward each pair is additionally preceded by nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually, and the commands are printed with PRP1 0x0 PRP2 0x0 instead of SGL descriptors ...]
00:30:58.718 [2024-07-14 14:06:25.229107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76488 len:8 PRP1 0x0 PRP2 0x0 00:30:58.718 [2024-07-14 14:06:25.229119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229131]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:58.718 [2024-07-14 14:06:25.229141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:58.718 [2024-07-14 14:06:25.229164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76496 len:8 PRP1 0x0 PRP2 0x0 00:30:58.718 [2024-07-14 14:06:25.229176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:58.718 [2024-07-14 14:06:25.229199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:58.718 [2024-07-14 14:06:25.229209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75664 len:8 PRP1 0x0 PRP2 0x0 00:30:58.718 [2024-07-14 14:06:25.229222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229277] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22af5b0 was disconnected and freed. reset controller. 
00:30:58.718 [2024-07-14 14:06:25.229293] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:58.718 [2024-07-14 14:06:25.229325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.718 [2024-07-14 14:06:25.229342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.718 [2024-07-14 14:06:25.229369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.718 [2024-07-14 14:06:25.229394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.718 [2024-07-14 14:06:25.229418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:25.229431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:58.718 [2024-07-14 14:06:25.229479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e5eb0 (9): Bad file descriptor 00:30:58.718 [2024-07-14 14:06:25.232721] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.718 [2024-07-14 14:06:25.349437] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:58.718 [2024-07-14 14:06:29.839627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38456 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.718 [2024-07-14 14:06:29.839902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.718 [2024-07-14 14:06:29.839927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.839941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.839955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.839968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.839983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.839996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 
[2024-07-14 14:06:29.840811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.840982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.840995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:58.719 [2024-07-14 14:06:29.841023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-14 14:06:29.841051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-14 14:06:29.841080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-14 14:06:29.841109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-14 14:06:29.841145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.719 [2024-07-14 14:06:29.841174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.719 [2024-07-14 14:06:29.841188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 
[2024-07-14 14:06:29.841319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.720 [2024-07-14 14:06:29.841457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.720 [2024-07-14 14:06:29.841471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:58.720 [2024-07-14 14:06:29.841485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:30:58.720-00:30:58.721: the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for in-flight WRITE sqid:1 nsid:1 (various cids) lba:38920 through lba:39168, len:8 SGL DATA BLOCK, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:58.721 [2024-07-14 14:06:29.842440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:58.721 [2024-07-14 14:06:29.842457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39176 len:8 PRP1 0x0 PRP2 0x0
00:30:58.721 [2024-07-14 14:06:29.842470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:30:58.721-00:30:58.722: the same nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o, followed by the manual-completion NOTICE triplet, repeats for queued WRITE sqid:1 cid:0 nsid:1 lba:39184 through lba:39440, len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:58.722 [2024-07-14 14:06:29.844110] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21096d0 was disconnected and freed. reset controller.
00:30:58.722 [2024-07-14 14:06:29.844127] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:58.722 [2024-07-14 14:06:29.844159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:58.722 [2024-07-14 14:06:29.844177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.722 [2024-07-14 14:06:29.844192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:58.722 [2024-07-14 14:06:29.844205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.722 [2024-07-14 14:06:29.844219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:58.722 [2024-07-14 14:06:29.844232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.722 [2024-07-14 14:06:29.844246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:58.722 [2024-07-14 14:06:29.844259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:58.722 [2024-07-14 14:06:29.844272] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:58.722 [2024-07-14 14:06:29.844307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e5eb0 (9): Bad file descriptor
00:30:58.722 [2024-07-14 14:06:29.847567] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:58.722 [2024-07-14 14:06:29.885563] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:58.722
00:30:58.722 Latency(us)
00:30:58.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.722 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:58.722 Verification LBA range: start 0x0 length 0x4000
00:30:58.722 NVMe0n1 : 15.01 8722.23 34.07 481.45 0.00 13879.34 558.27 16699.54
00:30:58.722 ===================================================================================================================
00:30:58.722 Total : 8722.23 34.07 481.45 0.00 13879.34 558.27 16699.54
00:30:58.722 Received shutdown signal, test time was about 15.000000 seconds
00:30:58.722
00:30:58.722 Latency(us)
00:30:58.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:58.722 ===================================================================================================================
00:30:58.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1573180
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1573180 /var/tmp/bdevperf.sock
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1573180 ']'
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:58.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:58.722 14:06:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:58.722 14:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:58.722 14:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:30:58.722 14:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-14 14:06:36.421752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
14:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
[2024-07-14 14:06:36.670435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:58.980 14:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:59.238 NVMe0n1
00:30:59.238 14:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:59.496
00:30:59.496 14:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:59.754
00:30:59.754 14:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:31:00.012 14:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:00.271 14:06:38 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:31:03.556 14:06:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:06:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:31:03.556 14:06:41 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1573848
14:06:41 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
14:06:41 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1573848
00:31:04.930 0
00:31:04.930 14:06:42 nvmf_tcp.nvmf_failover -- host/failover.sh@94
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:04.930 [2024-07-14 14:06:35.895740] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:04.930 [2024-07-14 14:06:35.895818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573180 ] 00:31:04.930 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.930 [2024-07-14 14:06:35.956223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.931 [2024-07-14 14:06:36.038683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.931 [2024-07-14 14:06:38.167926] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:04.931 [2024-07-14 14:06:38.168019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.931 [2024-07-14 14:06:38.168041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.931 [2024-07-14 14:06:38.168057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.931 [2024-07-14 14:06:38.168071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.931 [2024-07-14 14:06:38.168084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.931 [2024-07-14 14:06:38.168098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.931 [2024-07-14 14:06:38.168111] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.931 [2024-07-14 14:06:38.168124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.931 [2024-07-14 14:06:38.168137] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:04.931 [2024-07-14 14:06:38.168178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.931 [2024-07-14 14:06:38.168209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a7eb0 (9): Bad file descriptor 00:31:04.931 [2024-07-14 14:06:38.258996] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:04.931 Running I/O for 1 seconds... 00:31:04.931 00:31:04.931 Latency(us) 00:31:04.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.931 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:04.931 Verification LBA range: start 0x0 length 0x4000 00:31:04.931 NVMe0n1 : 1.02 8767.27 34.25 0.00 0.00 14535.11 3106.89 16311.18 00:31:04.931 =================================================================================================================== 00:31:04.931 Total : 8767.27 34.25 0.00 0.00 14535.11 3106.89 16311.18 00:31:04.931 14:06:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:04.931 14:06:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:04.931 14:06:42 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:31:05.188 14:06:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:05.188 14:06:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:05.446 14:06:43 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:05.704 14:06:43 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1573180 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1573180 ']' 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1573180 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1573180 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1573180' 00:31:08.992 killing process with pid 1573180 00:31:08.992 14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1573180 00:31:08.992 
14:06:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1573180 00:31:09.279 14:06:47 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:09.279 14:06:47 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.539 rmmod nvme_tcp 00:31:09.539 rmmod nvme_fabrics 00:31:09.539 rmmod nvme_keyring 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1570921 ']' 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1570921 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1570921 ']' 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1570921 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@951 -- # uname 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1570921 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1570921' 00:31:09.539 killing process with pid 1570921 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1570921 00:31:09.539 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1570921 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.799 14:06:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.703 14:06:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:11.703 00:31:11.703 real 0m34.540s 00:31:11.703 user 2m2.219s 00:31:11.703 sys 0m5.570s 00:31:11.703 14:06:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:11.703 14:06:49 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.703 ************************************ 00:31:11.703 END TEST nvmf_failover 00:31:11.703 ************************************ 00:31:11.703 14:06:49 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:11.962 14:06:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:11.962 14:06:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:11.962 14:06:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:11.962 ************************************ 00:31:11.962 START TEST nvmf_host_discovery 00:31:11.962 ************************************ 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:11.962 * Looking for test storage... 00:31:11.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:11.962 14:06:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.962 14:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.866 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:13.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:13.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:13.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:13.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.867 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:14.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:31:14.126 00:31:14.126 --- 10.0.0.2 ping statistics --- 00:31:14.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.126 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:31:14.126 00:31:14.126 --- 10.0.0.1 ping statistics --- 00:31:14.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.126 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1576443 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1576443 00:31:14.126 
14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1576443 ']' 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:14.126 14:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.126 [2024-07-14 14:06:51.956952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:14.126 [2024-07-14 14:06:51.957046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.126 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.126 [2024-07-14 14:06:52.023016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.384 [2024-07-14 14:06:52.112734] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.384 [2024-07-14 14:06:52.112783] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.384 [2024-07-14 14:06:52.112810] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.384 [2024-07-14 14:06:52.112822] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:14.384 [2024-07-14 14:06:52.112832] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.384 [2024-07-14 14:06:52.112863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.384 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:14.384 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 [2024-07-14 14:06:52.257809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 [2024-07-14 14:06:52.266023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:14.385 14:06:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 null0 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 null1 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1576472 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1576472 /tmp/host.sock 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1576472 ']' 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:14.385 
14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:14.385 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:14.385 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.385 [2024-07-14 14:06:52.338458] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:14.385 [2024-07-14 14:06:52.338536] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576472 ] 00:31:14.643 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.643 [2024-07-14 14:06:52.402334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.643 [2024-07-14 14:06:52.494187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:14.643 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.901 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.901 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.901 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.901 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.901 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.901 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == 
'' ]] 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.902 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.162 [2024-07-14 14:06:52.891646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.162 14:06:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:15.162 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:15.163 14:06:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:15.736 [2024-07-14 14:06:53.675541] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:15.736 [2024-07-14 14:06:53.675578] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:15.736 [2024-07-14 14:06:53.675603] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:15.993 [2024-07-14 14:06:53.762868] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:31:15.993 [2024-07-14 14:06:53.866672] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:15.993 [2024-07-14 14:06:53.866699] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.250 
14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.250 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_paths nvme0 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 
00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.251 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:16.509 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.767 [2024-07-14 14:06:54.564567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:16.767 [2024-07-14 14:06:54.565339] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:16.767 [2024-07-14 14:06:54.565374] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:16.767 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 
00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.768 [2024-07-14 14:06:54.651055] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- 
# (( max-- )) 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:16.768 14:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:17.025 [2024-07-14 14:06:54.749671] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:17.025 [2024-07-14 14:06:54.749696] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:17.025 [2024-07-14 14:06:54.749706] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:17.959 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' 
== 'expected_count))' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 [2024-07-14 14:06:55.789203] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:17.960 [2024-07-14 14:06:55.789250] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:17.960 14:06:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:17.960 [2024-07-14 14:06:55.797353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.960 [2024-07-14 14:06:55.797382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.960 [2024-07-14 14:06:55.797398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.960 [2024-07-14 14:06:55.797428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.960 [2024-07-14 14:06:55.797444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.960 [2024-07-14 14:06:55.797457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.960 [2024-07-14 14:06:55.797471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.960 [2024-07-14 14:06:55.797485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.960 [2024-07-14 14:06:55.797497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.960 [2024-07-14 14:06:55.807349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.817395] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:17.960 [2024-07-14 14:06:55.817591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.960 [2024-07-14 14:06:55.817623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155450 with addr=10.0.0.2, port=4420 00:31:17.960 [2024-07-14 14:06:55.817641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 [2024-07-14 14:06:55.817666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.817690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.960 [2024-07-14 14:06:55.817706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.960 [2024-07-14 14:06:55.817722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.960 [2024-07-14 14:06:55.817744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.960 [2024-07-14 14:06:55.827473] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:17.960 [2024-07-14 14:06:55.827644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.960 [2024-07-14 14:06:55.827675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155450 with addr=10.0.0.2, port=4420 00:31:17.960 [2024-07-14 14:06:55.827693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 [2024-07-14 14:06:55.827717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.827740] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.960 [2024-07-14 14:06:55.827756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.960 [2024-07-14 14:06:55.827777] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.960 [2024-07-14 14:06:55.827799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.960 [2024-07-14 14:06:55.837552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:17.960 [2024-07-14 14:06:55.837723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.960 [2024-07-14 14:06:55.837756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155450 with addr=10.0.0.2, port=4420 00:31:17.960 [2024-07-14 14:06:55.837774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 [2024-07-14 14:06:55.837798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.837821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.960 [2024-07-14 14:06:55.837837] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.960 [2024-07-14 14:06:55.837852] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.960 [2024-07-14 14:06:55.837896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:17.960 [2024-07-14 14:06:55.847634] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:17.960 [2024-07-14 14:06:55.847821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.960 [2024-07-14 14:06:55.847853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155450 with addr=10.0.0.2, port=4420 00:31:17.960 [2024-07-14 
14:06:55.847870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 [2024-07-14 14:06:55.847938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.847963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.960 [2024-07-14 14:06:55.847977] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.960 [2024-07-14 14:06:55.847990] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.960 [2024-07-14 14:06:55.848009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.960 [2024-07-14 14:06:55.857714] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:17.960 [2024-07-14 14:06:55.857868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.960 [2024-07-14 14:06:55.857908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155450 with addr=10.0.0.2, port=4420 00:31:17.960 [2024-07-14 14:06:55.857941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 [2024-07-14 14:06:55.857963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.857997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.960 [2024-07-14 14:06:55.858015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.960 [2024-07-14 14:06:55.858028] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.960 [2024-07-14 14:06:55.858046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.960 [2024-07-14 14:06:55.867797] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:17.960 [2024-07-14 14:06:55.867970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.960 [2024-07-14 14:06:55.867997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155450 with addr=10.0.0.2, port=4420 00:31:17.960 [2024-07-14 14:06:55.868013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155450 is same with the state(5) to be set 00:31:17.960 [2024-07-14 14:06:55.868035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155450 (9): Bad file descriptor 00:31:17.960 [2024-07-14 14:06:55.868056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.960 [2024-07-14 14:06:55.868069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.960 [2024-07-14 14:06:55.868083] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.960 [2024-07-14 14:06:55.868101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.960 [2024-07-14 14:06:55.877807] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:17.960 [2024-07-14 14:06:55.877837] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:17.960 14:06:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:17.960 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:18.220 14:06:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:18.220 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.221 14:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.599 [2024-07-14 14:06:57.163995] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:19.599 [2024-07-14 14:06:57.164032] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:19.599 [2024-07-14 14:06:57.164055] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:19.599 [2024-07-14 14:06:57.251350] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:19.599 [2024-07-14 14:06:57.521235] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:19.599 [2024-07-14 14:06:57.521293] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:19.599 14:06:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.599 request: 00:31:19.599 { 00:31:19.599 "name": "nvme", 00:31:19.599 "trtype": "tcp", 00:31:19.599 "traddr": "10.0.0.2", 00:31:19.599 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:19.599 "adrfam": "ipv4", 00:31:19.599 "trsvcid": "8009", 00:31:19.599 "wait_for_attach": true, 00:31:19.599 "method": "bdev_nvme_start_discovery", 00:31:19.599 "req_id": 1 00:31:19.599 } 00:31:19.599 Got JSON-RPC error response 00:31:19.599 response: 00:31:19.599 { 00:31:19.599 "code": -17, 00:31:19.599 "message": "File exists" 
00:31:19.599 } 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.599 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.858 request: 00:31:19.858 { 00:31:19.858 "name": "nvme_second", 00:31:19.858 "trtype": "tcp", 00:31:19.858 "traddr": "10.0.0.2", 00:31:19.858 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:19.858 
"adrfam": "ipv4", 00:31:19.858 "trsvcid": "8009", 00:31:19.858 "wait_for_attach": true, 00:31:19.858 "method": "bdev_nvme_start_discovery", 00:31:19.858 "req_id": 1 00:31:19.858 } 00:31:19.858 Got JSON-RPC error response 00:31:19.858 response: 00:31:19.858 { 00:31:19.858 "code": -17, 00:31:19.858 "message": "File exists" 00:31:19.858 } 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.858 14:06:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.858 14:06:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.793 [2024-07-14 14:06:58.724619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.793 [2024-07-14 14:06:58.724675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2156190 with addr=10.0.0.2, port=8010 00:31:20.793 [2024-07-14 14:06:58.724714] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:20.793 [2024-07-14 14:06:58.724730] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:20.793 [2024-07-14 14:06:58.724744] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:22.163 [2024-07-14 14:06:59.727059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.163 [2024-07-14 14:06:59.727093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2156190 with addr=10.0.0.2, port=8010 00:31:22.163 [2024-07-14 14:06:59.727113] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:22.163 [2024-07-14 14:06:59.727125] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:22.163 [2024-07-14 14:06:59.727136] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:23.099 [2024-07-14 14:07:00.729342] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:23.099 request: 00:31:23.099 { 00:31:23.099 "name": "nvme_second", 00:31:23.099 "trtype": "tcp", 00:31:23.099 "traddr": "10.0.0.2", 00:31:23.099 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:23.099 "adrfam": "ipv4", 00:31:23.099 "trsvcid": "8010", 00:31:23.099 "attach_timeout_ms": 3000, 00:31:23.099 "method": "bdev_nvme_start_discovery", 00:31:23.099 "req_id": 1 00:31:23.099 } 
00:31:23.099 Got JSON-RPC error response 00:31:23.099 response: 00:31:23.099 { 00:31:23.099 "code": -110, 00:31:23.099 "message": "Connection timed out" 00:31:23.099 } 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1576472 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:23.099 14:07:00 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:23.099 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:23.100 rmmod nvme_tcp 00:31:23.100 rmmod nvme_fabrics 00:31:23.100 rmmod nvme_keyring 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1576443 ']' 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1576443 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1576443 ']' 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1576443 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1576443 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1576443' 00:31:23.100 killing process with pid 1576443 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@965 -- # kill 1576443 00:31:23.100 14:07:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1576443 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:23.358 14:07:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.268 00:31:25.268 real 0m13.421s 00:31:25.268 user 0m19.463s 00:31:25.268 sys 0m2.906s 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.268 ************************************ 00:31:25.268 END TEST nvmf_host_discovery 00:31:25.268 ************************************ 00:31:25.268 14:07:03 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:25.268 14:07:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:25.268 14:07:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:25.268 14:07:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.268 
************************************ 00:31:25.268 START TEST nvmf_host_multipath_status 00:31:25.268 ************************************ 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:25.268 * Looking for test storage... 00:31:25.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:25.268 14:07:03 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.268 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:25.527 14:07:03 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.527 
14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:25.527 14:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.428 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:27.429 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:27.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:27.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:27.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.429 14:07:05 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:27.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:27.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:31:27.429 00:31:27.429 --- 10.0.0.2 ping statistics --- 00:31:27.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.429 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:27.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:31:27.429 00:31:27.429 --- 10.0.0.1 ping statistics --- 00:31:27.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.429 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1579618 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1579618 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1579618 ']' 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:27.429 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:27.694 [2024-07-14 14:07:05.410414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:31:27.694 [2024-07-14 14:07:05.410486] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.694 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.694 [2024-07-14 14:07:05.474402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:27.694 [2024-07-14 14:07:05.558136] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.694 [2024-07-14 14:07:05.558190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.694 [2024-07-14 14:07:05.558218] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.694 [2024-07-14 14:07:05.558230] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.694 [2024-07-14 14:07:05.558239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
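The `nvmf_tcp_init` steps traced above move one port of the ice NIC pair into a private network namespace so that target and initiator traffic must cross the physical link. A condensed recap of those commands (interface names `cvl_0_0`/`cvl_0_1` are the ports discovered earlier in this log; running this requires root and the actual hardware):

```shell
# Condensed recap of the nvmf_tcp_init sequence from the log (requires root).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # verify connectivity both ways
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this split in place, `nvmf_tgt` is launched under `ip netns exec $NS` so it listens on 10.0.0.2 while the host-side initiator connects from 10.0.0.1.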
00:31:27.694 [2024-07-14 14:07:05.558290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.694 [2024-07-14 14:07:05.558294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.694 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:27.694 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:27.694 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:27.694 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.694 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:28.002 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.002 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1579618 00:31:28.002 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.002 [2024-07-14 14:07:05.923166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.002 14:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:28.569 Malloc0 00:31:28.569 14:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:28.569 14:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.828 14:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.086 [2024-07-14 14:07:06.987491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.086 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:29.344 [2024-07-14 14:07:07.232093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1579788 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1579788 /var/tmp/bdevperf.sock 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1579788 ']' 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
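The RPC calls above provision the device under test: a TCP transport, a malloc bdev, and one subsystem exposed on two listeners (ports 4420 and 4421) so the host sees two paths to the same namespace. A self-contained sketch of that sequence, with `rpc` as an echo stub standing in for `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence that provisions the two-listener subsystem.
# "rpc" is a stub here; a real run would invoke scripts/rpc.py against the target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
# Two listeners on the same IP give the initiator two paths to one namespace.
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
```

bdevperf then attaches to both listeners with `-x multipath`, producing the single `Nvme0n1` bdev whose paths the rest of the test inspects.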
00:31:29.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:29.344 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:29.602 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:29.602 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:29.602 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:29.860 14:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:30.428 Nvme0n1 00:31:30.428 14:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:30.995 Nvme0n1 00:31:30.995 14:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:30.995 14:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:32.903 14:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:32.903 14:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:33.161 14:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:33.419 14:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:34.350 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:34.350 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:34.350 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.350 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.607 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.607 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:34.607 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.607 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.865 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.865 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.865 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.865 14:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.123 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.123 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.123 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.123 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.389 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.389 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.389 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.389 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.647 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.647 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:35.647 14:07:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.647 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.903 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.903 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:35.903 14:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:36.160 14:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:36.418 14:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:37.357 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:37.357 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:37.357 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.357 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.615 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:31:37.615 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:37.615 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.615 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.873 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.873 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.873 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.873 14:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.130 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.130 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.130 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.131 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.388 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.388 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:31:38.388 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.388 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.646 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.646 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.646 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.646 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:38.904 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.904 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:38.904 14:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:39.163 14:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:39.423 14:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:40.359 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:40.359 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:40.359 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.359 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.617 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.617 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:40.617 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.617 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.875 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.875 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.875 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.875 14:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.132 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.132 14:07:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.132 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.132 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.389 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.389 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.389 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.389 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.645 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.645 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.645 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.645 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.902 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.902 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
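Each `set_ANA_state A B` step seen in the trace sets the ANA state of the 4420 listener to A and the 4421 listener to B, after which `check_status` verifies which path became current. A minimal sketch of that helper, with `rpc` stubbed out in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the set_ANA_state helper pattern: one ANA state per listener.
# "rpc" is an echo stub standing in for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
set_ANA_state() {
    rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# The log walks this progression, checking path status after each step:
set_ANA_state optimized optimized
set_ANA_state non_optimized optimized
set_ANA_state non_optimized non_optimized
set_ANA_state non_optimized inaccessible
```

The `sleep 1` after each transition in the real script gives the initiator's ANA log-page update time to land before the status check.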
00:31:41.902 14:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:42.160 14:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:42.419 14:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.808 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.078 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]]
00:31:44.078 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:44.078 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:44.078 14:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:44.335 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:44.335 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:44.335 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:44.335 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:44.594 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:44.594 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:44.594 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:44.594 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:44.852 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:44.852 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:44.852 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:44.852 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:45.112 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:45.112 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:31:45.112 14:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:45.112 14:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:45.371 14:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:46.746 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:47.004 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:47.004 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:47.004 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:47.004 14:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:47.261 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:47.261 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:47.261 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:47.261 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:47.519 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:47.519 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:47.519 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:47.519 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:47.776 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:47.776 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:47.776 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:47.776 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:48.034 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:48.034 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:31:48.034 14:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:48.292 14:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:48.550 14:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:31:49.486 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:31:49.486 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:49.486 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:49.486 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:49.744 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:49.744 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:49.744 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:49.744 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:50.002 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:50.002 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:50.002 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:50.002 14:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:50.261 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:50.261 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:50.261 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:50.261 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:50.519 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:50.519 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:50.519 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:50.519 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:50.778 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:50.778 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:50.778 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:50.778 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:51.036 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:51.036 14:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:31:51.294 14:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:31:51.294 14:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:51.552 14:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:51.812 14:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:31:52.756 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:31:52.756 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:52.756 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:52.756 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:53.013 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:53.013 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:53.013 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:53.013 14:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:53.272 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:53.272 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:53.272 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:53.272 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:53.529 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:53.529 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:53.529 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:53.529 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:53.786 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:53.786 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:53.786 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:53.786 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:54.043 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:54.043 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:54.043 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:54.043 14:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:54.301 14:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:54.301 14:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:31:54.301 14:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:54.558 14:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:54.817 14:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:31:55.765 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:31:55.765 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:55.765 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:55.765 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:56.022 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:56.022 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:56.022 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:56.022 14:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:56.279 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:56.279 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:56.279 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:56.279 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:56.537 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:56.537 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:56.537 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:56.537 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:56.795 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:56.795 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:56.795 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:56.795 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:57.054 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:57.054 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:57.054 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:57.054 14:07:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:57.312 14:07:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:57.312 14:07:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:31:57.312 14:07:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:57.570 14:07:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:31:57.830 14:07:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:31:58.767 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:31:58.767 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:58.767 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:58.767 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:59.025 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:59.025 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:59.025 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:59.025 14:07:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:59.283 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:59.283 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:59.283 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:59.283 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:59.540 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:59.540 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:59.540 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:59.540 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:59.820 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:59.820 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:59.820 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:59.820 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:00.088 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:00.088 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:00.088 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:00.088 14:07:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:00.344 14:07:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:00.344 14:07:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:32:00.344 14:07:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:00.602 14:07:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:00.860 14:07:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:32:01.793 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:32:01.793 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:01.793 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:01.793 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:02.050 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:02.050 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:02.050 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:02.050 14:07:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:02.308 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:02.308 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:02.308 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:02.308 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:02.565 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:02.565 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:02.565 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:02.565 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:02.823 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:02.823 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:02.823 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:02.823 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:03.081 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:03.081 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:03.081 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:03.081 14:07:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1579788
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1579788 ']'
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1579788
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1579788
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1579788'
killing process with pid 1579788
14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1579788
00:32:03.339 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1579788
00:32:03.600 Connection closed with partial response:
00:32:03.600
00:32:03.600
00:32:03.600 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1579788
00:32:03.600 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-14 14:07:07.289535] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
[2024-07-14 14:07:07.289607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579788 ]
00:32:03.600 EAL: No free 2048 kB hugepages reported on node 1
00:32:03.600 [2024-07-14 14:07:07.350481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:03.600 [2024-07-14 14:07:07.438852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:03.600 Running I/O for 90 seconds...
00:32:03.600 [2024-07-14 14:07:23.068907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.068981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.069317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.069333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.600 [2024-07-14 14:07:23.070282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:32:03.600 [2024-07-14 14:07:23.070634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:03.600 [2024-07-14 14:07:23.070650]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.600 [2024-07-14 14:07:23.070686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.600 [2024-07-14 14:07:23.070701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.600 [2024-07-14 14:07:23.070723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.600 [2024-07-14 14:07:23.070738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:03.600 [2024-07-14 14:07:23.070759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.600 [2024-07-14 14:07:23.070774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:03.600 [2024-07-14 14:07:23.070795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.600 [2024-07-14 14:07:23.070810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:03.600 [2024-07-14 14:07:23.070831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.070846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.070889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.070907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.070934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.070950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.070972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.601 [2024-07-14 14:07:23.070988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.071966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.071989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.601 [2024-07-14 14:07:23.072646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.601 [2024-07-14 14:07:23.072662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.072970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.072994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.073977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.073993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:03.602 [2024-07-14 14:07:23.074634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.602 [2024-07-14 14:07:23.074650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.074692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.074733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.074775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.074817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.074884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.074932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.074959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.074975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:23.075019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:23.075733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:23.075750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.592744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:38.592832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.592913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:38.592945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.593324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:38.593348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.593374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:03.603 [2024-07-14 14:07:38.593391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.593414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:38.593430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.593466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:38.593484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.593506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.603 [2024-07-14 14:07:38.593522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:03.603 [2024-07-14 14:07:38.593559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.593575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.593596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.593612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.593634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.593649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.593686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.593702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.593726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.593742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.595965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.595987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.604 [2024-07-14 14:07:38.596767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:03.604 [2024-07-14 14:07:38.596789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.596805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:03.605 [2024-07-14 14:07:38.596826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.596841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:03.605 [2024-07-14 14:07:38.596885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.596904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:03.605 [2024-07-14 14:07:38.596927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.596943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:03.605 [2024-07-14 14:07:38.596965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.596981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:03.605 [2024-07-14 14:07:38.597003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.597019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:03.605 [2024-07-14 14:07:38.597041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.605 [2024-07-14 14:07:38.597058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:32:03.605 [2024-07-14 14:07:38.597083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.605 [2024-07-14 14:07:38.597100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:32:03.605 [2024-07-14 14:07:38.597122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.605 [2024-07-14 14:07:38.597138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:32:03.605 [2024-07-14 14:07:38.597161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.605 [2024-07-14 14:07:38.597192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:32:03.605 Received shutdown signal, test time was about 32.346148 seconds
00:32:03.605
00:32:03.605 Latency(us)
00:32:03.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:03.605 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:03.605 Verification LBA range: start 0x0 length 0x4000
00:32:03.605 Nvme0n1 : 32.35 8127.17 31.75 0.00 0.00 15722.07 277.62 4026531.84
00:32:03.605 ===================================================================================================================
00:32:03.605 Total : 8127.17 31.75 0.00 0.00 15722.07 277.62 4026531.84
00:32:03.605 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1579618 ']'
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1579618
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1579618 ']'
00:32:03.864 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1579618
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1579618
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1579618'
killing process with pid 1579618
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1579618
00:32:03.865 14:07:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1579618
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:04.122 14:07:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:06.655 14:07:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:06.655
00:32:06.655 real 0m40.869s
00:32:06.655 user 2m1.219s
00:32:06.655 sys 0m11.269s
14:07:44
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:06.655 14:07:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:06.655 ************************************ 00:32:06.655 END TEST nvmf_host_multipath_status 00:32:06.655 ************************************ 00:32:06.655 14:07:44 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:06.655 14:07:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:06.655 14:07:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:06.655 14:07:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:06.655 ************************************ 00:32:06.655 START TEST nvmf_discovery_remove_ifc 00:32:06.655 ************************************ 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:06.655 * Looking for test storage... 
00:32:06.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.655 14:07:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.655 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.656 14:07:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:06.656 14:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:08.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:08.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.586 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:08.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:08.587 Found net devices under 0000:0a:00.1: cvl_0_1 
00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.587 
14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:08.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:32:08.587 00:32:08.587 --- 10.0.0.2 ping statistics --- 00:32:08.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.587 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:32:08.587 00:32:08.587 --- 10.0.0.1 ping statistics --- 00:32:08.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.587 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1585965 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:08.587 
14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1585965 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1585965 ']' 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:08.587 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.587 [2024-07-14 14:07:46.359626] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:08.587 [2024-07-14 14:07:46.359707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.587 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.587 [2024-07-14 14:07:46.428955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.587 [2024-07-14 14:07:46.517310] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.587 [2024-07-14 14:07:46.517373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:08.587 [2024-07-14 14:07:46.517389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.587 [2024-07-14 14:07:46.517402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.587 [2024-07-14 14:07:46.517413] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.587 [2024-07-14 14:07:46.517444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.846 [2024-07-14 14:07:46.677559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.846 [2024-07-14 14:07:46.685768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:08.846 null0 00:32:08.846 [2024-07-14 14:07:46.717680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1585992 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1585992 /tmp/host.sock 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1585992 ']' 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:08.846 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:08.846 14:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.846 [2024-07-14 14:07:46.786023] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:32:08.846 [2024-07-14 14:07:46.786098] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585992 ] 00:32:08.846 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.105 [2024-07-14 14:07:46.854660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.105 [2024-07-14 14:07:46.954213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.105 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.365 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.365 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:09.365 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.365 14:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.305 [2024-07-14 14:07:48.227059] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:10.305 [2024-07-14 14:07:48.227092] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:10.305 [2024-07-14 14:07:48.227114] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:10.563 [2024-07-14 14:07:48.314413] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:10.563 [2024-07-14 14:07:48.417128] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:10.563 [2024-07-14 14:07:48.417206] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:10.563 [2024-07-14 14:07:48.417257] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:10.563 [2024-07-14 14:07:48.417284] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:10.563 [2024-07-14 14:07:48.417324] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.563 [2024-07-14 14:07:48.423829] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x911df0 was disconnected and freed. delete nvme_qpair. 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.563 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.822 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:10.822 14:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:11.759 14:07:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:12.695 14:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:14.074 14:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 
00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:15.010 14:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:15.948 14:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:15.948 [2024-07-14 14:07:53.858185] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:15.948 [2024-07-14 14:07:53.858265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.948 [2024-07-14 14:07:53.858288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.948 [2024-07-14 14:07:53.858308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.948 [2024-07-14 14:07:53.858323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.948 [2024-07-14 14:07:53.858339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.948 [2024-07-14 14:07:53.858354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.948 [2024-07-14 14:07:53.858369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.948 [2024-07-14 14:07:53.858384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.948 [2024-07-14 14:07:53.858411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:15.948 [2024-07-14 14:07:53.858427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.948 [2024-07-14 14:07:53.858441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8f80 is same with the state(5) to be set 00:32:15.948 [2024-07-14 14:07:53.868186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d8f80 (9): Bad file descriptor 00:32:15.948 [2024-07-14 14:07:53.878236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:16.886 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:17.147 [2024-07-14 14:07:54.898940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:17.147 [2024-07-14 14:07:54.899014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d8f80 with addr=10.0.0.2, port=4420 00:32:17.147 [2024-07-14 14:07:54.899039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8f80 is same with the state(5) to be set 00:32:17.147 [2024-07-14 14:07:54.899094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x8d8f80 (9): Bad file descriptor 00:32:17.147 [2024-07-14 14:07:54.899578] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:17.147 [2024-07-14 14:07:54.899612] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:17.147 [2024-07-14 14:07:54.899631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:17.147 [2024-07-14 14:07:54.899649] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:17.147 [2024-07-14 14:07:54.899683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:17.147 [2024-07-14 14:07:54.899702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:17.147 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.147 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:17.147 14:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:18.118 [2024-07-14 14:07:55.902212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:18.118 [2024-07-14 14:07:55.902297] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:18.118 [2024-07-14 14:07:55.902315] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:18.118 [2024-07-14 14:07:55.902332] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:18.118 [2024-07-14 14:07:55.902366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:18.118 [2024-07-14 14:07:55.902409] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:18.118 [2024-07-14 14:07:55.902480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.118 [2024-07-14 14:07:55.902503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.118 [2024-07-14 14:07:55.902536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-14 14:07:55.902552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-14 14:07:55.902568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-14 14:07:55.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-14 14:07:55.902597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-14 14:07:55.902612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-14 14:07:55.902627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-07-14 14:07:55.902641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-07-14 14:07:55.902656] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:18.119 [2024-07-14 14:07:55.902788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d8410 (9): Bad file descriptor 00:32:18.119 [2024-07-14 14:07:55.903804] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:18.119 [2024-07-14 14:07:55.903830] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:18.119 14:07:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.119 14:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.119 14:07:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.119 14:07:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:18.119 14:07:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.493 
14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:19.493 14:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:20.062 [2024-07-14 14:07:57.960998] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:20.062 [2024-07-14 14:07:57.961023] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:20.062 [2024-07-14 14:07:57.961045] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:20.320 [2024-07-14 14:07:58.047368] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.320 [2024-07-14 14:07:58.103219] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:20.320 [2024-07-14 14:07:58.103273] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:20.320 [2024-07-14 14:07:58.103310] 
bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:20.320 [2024-07-14 14:07:58.103336] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:20.320 [2024-07-14 14:07:58.103351] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:20.320 [2024-07-14 14:07:58.109476] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8e5d30 was disconnected and freed. delete nvme_qpair. 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:20.320 14:07:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:21.259 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.259 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.259 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 1585992 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1585992 ']' 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1585992 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1585992 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1585992' 00:32:21.260 killing process with pid 1585992 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1585992 00:32:21.260 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1585992 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.519 rmmod nvme_tcp 
00:32:21.519 rmmod nvme_fabrics 00:32:21.519 rmmod nvme_keyring 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1585965 ']' 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1585965 00:32:21.519 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1585965 ']' 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1585965 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1585965 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1585965' 00:32:21.778 killing process with pid 1585965 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1585965 00:32:21.778 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1585965 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:22.039 
14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:22.039 14:07:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.943 14:08:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.943 00:32:23.943 real 0m17.700s 00:32:23.943 user 0m25.649s 00:32:23.943 sys 0m3.056s 00:32:23.943 14:08:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:23.943 14:08:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.943 ************************************ 00:32:23.943 END TEST nvmf_discovery_remove_ifc 00:32:23.943 ************************************ 00:32:23.943 14:08:01 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:23.943 14:08:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:23.943 14:08:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:23.943 14:08:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.943 ************************************ 00:32:23.943 START TEST nvmf_identify_kernel_target 00:32:23.943 ************************************ 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:23.943 * Looking for test storage... 00:32:23.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:23.943 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:24.203 14:08:01 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:24.203 14:08:01 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:24.203 14:08:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.102 14:08:03 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:32:26.102 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:26.102 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:26.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:26.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:26.103 Found net devices under 
0000:0a:00.1: cvl_0_1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:26.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:32:26.103 00:32:26.103 --- 10.0.0.2 ping statistics --- 00:32:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.103 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:32:26.103 00:32:26.103 --- 10.0.0.1 ping statistics --- 00:32:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.103 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.103 14:08:03 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:26.103 14:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:27.038 Waiting for block devices as requested 00:32:27.038 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:27.296 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.296 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:27.296 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:27.554 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:27.554 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:27.554 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:27.554 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:27.813 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:27.813 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.813 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:27.813 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:28.073 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:28.073 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:28.073 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:28.331 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:28.331 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:28.331 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:28.331 No valid GPT data, bailing 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:28.591 00:32:28.591 Discovery Log Number of Records 2, Generation counter 2 00:32:28.591 =====Discovery Log Entry 0====== 00:32:28.591 trtype: tcp 00:32:28.591 adrfam: ipv4 00:32:28.591 subtype: current discovery subsystem 00:32:28.591 treq: not specified, sq flow control disable supported 00:32:28.591 portid: 1 00:32:28.591 trsvcid: 4420 00:32:28.591 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:28.591 traddr: 10.0.0.1 00:32:28.591 eflags: none 00:32:28.591 sectype: none 00:32:28.591 =====Discovery Log Entry 1====== 00:32:28.591 trtype: tcp 00:32:28.591 adrfam: ipv4 00:32:28.591 subtype: nvme subsystem 00:32:28.591 treq: not specified, sq flow control disable supported 00:32:28.591 portid: 1 00:32:28.591 trsvcid: 4420 00:32:28.591 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:28.591 traddr: 10.0.0.1 00:32:28.591 eflags: none 00:32:28.591 sectype: none 00:32:28.591 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:28.591 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:28.591 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.591 ===================================================== 00:32:28.591 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:28.591 ===================================================== 00:32:28.591 Controller Capabilities/Features 00:32:28.591 ================================ 00:32:28.591 Vendor ID: 0000 00:32:28.591 Subsystem Vendor ID: 0000 00:32:28.591 Serial Number: 502b36b482561b9e4df4 00:32:28.591 Model Number: Linux 00:32:28.591 Firmware Version: 6.7.0-68 00:32:28.591 Recommended Arb Burst: 0 00:32:28.591 IEEE OUI Identifier: 00 00 00 00:32:28.591 Multi-path I/O 00:32:28.591 May have multiple subsystem ports: No 00:32:28.591 May have multiple controllers: No 00:32:28.591 Associated with SR-IOV VF: No 00:32:28.591 Max Data Transfer Size: Unlimited 00:32:28.591 Max Number of Namespaces: 0 00:32:28.591 Max Number of I/O Queues: 1024 00:32:28.591 NVMe Specification Version (VS): 1.3 00:32:28.591 NVMe Specification Version (Identify): 1.3 00:32:28.591 Maximum Queue Entries: 1024 00:32:28.591 Contiguous Queues Required: No 00:32:28.591 Arbitration Mechanisms Supported 00:32:28.591 Weighted Round Robin: Not Supported 00:32:28.591 Vendor Specific: Not Supported 00:32:28.591 Reset Timeout: 7500 ms 00:32:28.591 Doorbell Stride: 4 bytes 00:32:28.591 NVM Subsystem Reset: Not Supported 00:32:28.591 Command Sets Supported 00:32:28.591 NVM Command Set: Supported 00:32:28.591 Boot Partition: Not Supported 00:32:28.591 Memory Page Size Minimum: 4096 bytes 00:32:28.591 Memory Page Size Maximum: 4096 bytes 00:32:28.591 Persistent Memory Region: Not Supported 00:32:28.591 Optional Asynchronous Events Supported 00:32:28.591 Namespace Attribute Notices: Not Supported 00:32:28.591 Firmware Activation Notices: Not Supported 00:32:28.591 ANA Change Notices: Not Supported 00:32:28.591 PLE Aggregate Log Change Notices: Not Supported 
00:32:28.591 LBA Status Info Alert Notices: Not Supported 00:32:28.591 EGE Aggregate Log Change Notices: Not Supported 00:32:28.591 Normal NVM Subsystem Shutdown event: Not Supported 00:32:28.591 Zone Descriptor Change Notices: Not Supported 00:32:28.591 Discovery Log Change Notices: Supported 00:32:28.591 Controller Attributes 00:32:28.591 128-bit Host Identifier: Not Supported 00:32:28.591 Non-Operational Permissive Mode: Not Supported 00:32:28.591 NVM Sets: Not Supported 00:32:28.591 Read Recovery Levels: Not Supported 00:32:28.591 Endurance Groups: Not Supported 00:32:28.591 Predictable Latency Mode: Not Supported 00:32:28.591 Traffic Based Keep ALive: Not Supported 00:32:28.591 Namespace Granularity: Not Supported 00:32:28.591 SQ Associations: Not Supported 00:32:28.591 UUID List: Not Supported 00:32:28.591 Multi-Domain Subsystem: Not Supported 00:32:28.591 Fixed Capacity Management: Not Supported 00:32:28.591 Variable Capacity Management: Not Supported 00:32:28.591 Delete Endurance Group: Not Supported 00:32:28.591 Delete NVM Set: Not Supported 00:32:28.591 Extended LBA Formats Supported: Not Supported 00:32:28.591 Flexible Data Placement Supported: Not Supported 00:32:28.591 00:32:28.591 Controller Memory Buffer Support 00:32:28.591 ================================ 00:32:28.591 Supported: No 00:32:28.591 00:32:28.591 Persistent Memory Region Support 00:32:28.591 ================================ 00:32:28.591 Supported: No 00:32:28.591 00:32:28.591 Admin Command Set Attributes 00:32:28.591 ============================ 00:32:28.591 Security Send/Receive: Not Supported 00:32:28.591 Format NVM: Not Supported 00:32:28.591 Firmware Activate/Download: Not Supported 00:32:28.591 Namespace Management: Not Supported 00:32:28.591 Device Self-Test: Not Supported 00:32:28.591 Directives: Not Supported 00:32:28.591 NVMe-MI: Not Supported 00:32:28.591 Virtualization Management: Not Supported 00:32:28.591 Doorbell Buffer Config: Not Supported 00:32:28.591 Get LBA Status 
Capability: Not Supported 00:32:28.591 Command & Feature Lockdown Capability: Not Supported 00:32:28.591 Abort Command Limit: 1 00:32:28.591 Async Event Request Limit: 1 00:32:28.591 Number of Firmware Slots: N/A 00:32:28.591 Firmware Slot 1 Read-Only: N/A 00:32:28.592 Firmware Activation Without Reset: N/A 00:32:28.592 Multiple Update Detection Support: N/A 00:32:28.592 Firmware Update Granularity: No Information Provided 00:32:28.592 Per-Namespace SMART Log: No 00:32:28.592 Asymmetric Namespace Access Log Page: Not Supported 00:32:28.592 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:28.592 Command Effects Log Page: Not Supported 00:32:28.592 Get Log Page Extended Data: Supported 00:32:28.592 Telemetry Log Pages: Not Supported 00:32:28.592 Persistent Event Log Pages: Not Supported 00:32:28.592 Supported Log Pages Log Page: May Support 00:32:28.592 Commands Supported & Effects Log Page: Not Supported 00:32:28.592 Feature Identifiers & Effects Log Page:May Support 00:32:28.592 NVMe-MI Commands & Effects Log Page: May Support 00:32:28.592 Data Area 4 for Telemetry Log: Not Supported 00:32:28.592 Error Log Page Entries Supported: 1 00:32:28.592 Keep Alive: Not Supported 00:32:28.592 00:32:28.592 NVM Command Set Attributes 00:32:28.592 ========================== 00:32:28.592 Submission Queue Entry Size 00:32:28.592 Max: 1 00:32:28.592 Min: 1 00:32:28.592 Completion Queue Entry Size 00:32:28.592 Max: 1 00:32:28.592 Min: 1 00:32:28.592 Number of Namespaces: 0 00:32:28.592 Compare Command: Not Supported 00:32:28.592 Write Uncorrectable Command: Not Supported 00:32:28.592 Dataset Management Command: Not Supported 00:32:28.592 Write Zeroes Command: Not Supported 00:32:28.592 Set Features Save Field: Not Supported 00:32:28.592 Reservations: Not Supported 00:32:28.592 Timestamp: Not Supported 00:32:28.592 Copy: Not Supported 00:32:28.592 Volatile Write Cache: Not Present 00:32:28.592 Atomic Write Unit (Normal): 1 00:32:28.592 Atomic Write Unit (PFail): 1 
00:32:28.592 Atomic Compare & Write Unit: 1 00:32:28.592 Fused Compare & Write: Not Supported 00:32:28.592 Scatter-Gather List 00:32:28.592 SGL Command Set: Supported 00:32:28.592 SGL Keyed: Not Supported 00:32:28.592 SGL Bit Bucket Descriptor: Not Supported 00:32:28.592 SGL Metadata Pointer: Not Supported 00:32:28.592 Oversized SGL: Not Supported 00:32:28.592 SGL Metadata Address: Not Supported 00:32:28.592 SGL Offset: Supported 00:32:28.592 Transport SGL Data Block: Not Supported 00:32:28.592 Replay Protected Memory Block: Not Supported 00:32:28.592 00:32:28.592 Firmware Slot Information 00:32:28.592 ========================= 00:32:28.592 Active slot: 0 00:32:28.592 00:32:28.592 00:32:28.592 Error Log 00:32:28.592 ========= 00:32:28.592 00:32:28.592 Active Namespaces 00:32:28.592 ================= 00:32:28.592 Discovery Log Page 00:32:28.592 ================== 00:32:28.592 Generation Counter: 2 00:32:28.592 Number of Records: 2 00:32:28.592 Record Format: 0 00:32:28.592 00:32:28.592 Discovery Log Entry 0 00:32:28.592 ---------------------- 00:32:28.592 Transport Type: 3 (TCP) 00:32:28.592 Address Family: 1 (IPv4) 00:32:28.592 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:28.592 Entry Flags: 00:32:28.592 Duplicate Returned Information: 0 00:32:28.592 Explicit Persistent Connection Support for Discovery: 0 00:32:28.592 Transport Requirements: 00:32:28.592 Secure Channel: Not Specified 00:32:28.592 Port ID: 1 (0x0001) 00:32:28.592 Controller ID: 65535 (0xffff) 00:32:28.592 Admin Max SQ Size: 32 00:32:28.592 Transport Service Identifier: 4420 00:32:28.592 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:28.592 Transport Address: 10.0.0.1 00:32:28.592 Discovery Log Entry 1 00:32:28.592 ---------------------- 00:32:28.592 Transport Type: 3 (TCP) 00:32:28.592 Address Family: 1 (IPv4) 00:32:28.592 Subsystem Type: 2 (NVM Subsystem) 00:32:28.592 Entry Flags: 00:32:28.592 Duplicate Returned Information: 0 00:32:28.592 Explicit Persistent 
Connection Support for Discovery: 0 00:32:28.592 Transport Requirements: 00:32:28.592 Secure Channel: Not Specified 00:32:28.592 Port ID: 1 (0x0001) 00:32:28.592 Controller ID: 65535 (0xffff) 00:32:28.592 Admin Max SQ Size: 32 00:32:28.592 Transport Service Identifier: 4420 00:32:28.592 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:28.592 Transport Address: 10.0.0.1 00:32:28.592 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.592 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.852 get_feature(0x01) failed 00:32:28.852 get_feature(0x02) failed 00:32:28.852 get_feature(0x04) failed 00:32:28.852 ===================================================== 00:32:28.852 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:28.852 ===================================================== 00:32:28.852 Controller Capabilities/Features 00:32:28.852 ================================ 00:32:28.852 Vendor ID: 0000 00:32:28.852 Subsystem Vendor ID: 0000 00:32:28.852 Serial Number: ed751d2e68dd28157804 00:32:28.852 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:28.852 Firmware Version: 6.7.0-68 00:32:28.852 Recommended Arb Burst: 6 00:32:28.852 IEEE OUI Identifier: 00 00 00 00:32:28.852 Multi-path I/O 00:32:28.852 May have multiple subsystem ports: Yes 00:32:28.852 May have multiple controllers: Yes 00:32:28.852 Associated with SR-IOV VF: No 00:32:28.852 Max Data Transfer Size: Unlimited 00:32:28.852 Max Number of Namespaces: 1024 00:32:28.852 Max Number of I/O Queues: 128 00:32:28.852 NVMe Specification Version (VS): 1.3 00:32:28.852 NVMe Specification Version (Identify): 1.3 00:32:28.852 Maximum Queue Entries: 1024 00:32:28.852 Contiguous Queues Required: No 00:32:28.852 Arbitration Mechanisms Supported 
00:32:28.852 Weighted Round Robin: Not Supported 00:32:28.852 Vendor Specific: Not Supported 00:32:28.852 Reset Timeout: 7500 ms 00:32:28.852 Doorbell Stride: 4 bytes 00:32:28.852 NVM Subsystem Reset: Not Supported 00:32:28.852 Command Sets Supported 00:32:28.852 NVM Command Set: Supported 00:32:28.852 Boot Partition: Not Supported 00:32:28.852 Memory Page Size Minimum: 4096 bytes 00:32:28.852 Memory Page Size Maximum: 4096 bytes 00:32:28.852 Persistent Memory Region: Not Supported 00:32:28.852 Optional Asynchronous Events Supported 00:32:28.852 Namespace Attribute Notices: Supported 00:32:28.852 Firmware Activation Notices: Not Supported 00:32:28.852 ANA Change Notices: Supported 00:32:28.852 PLE Aggregate Log Change Notices: Not Supported 00:32:28.852 LBA Status Info Alert Notices: Not Supported 00:32:28.852 EGE Aggregate Log Change Notices: Not Supported 00:32:28.852 Normal NVM Subsystem Shutdown event: Not Supported 00:32:28.852 Zone Descriptor Change Notices: Not Supported 00:32:28.852 Discovery Log Change Notices: Not Supported 00:32:28.852 Controller Attributes 00:32:28.852 128-bit Host Identifier: Supported 00:32:28.852 Non-Operational Permissive Mode: Not Supported 00:32:28.852 NVM Sets: Not Supported 00:32:28.852 Read Recovery Levels: Not Supported 00:32:28.852 Endurance Groups: Not Supported 00:32:28.852 Predictable Latency Mode: Not Supported 00:32:28.852 Traffic Based Keep ALive: Supported 00:32:28.852 Namespace Granularity: Not Supported 00:32:28.852 SQ Associations: Not Supported 00:32:28.852 UUID List: Not Supported 00:32:28.852 Multi-Domain Subsystem: Not Supported 00:32:28.852 Fixed Capacity Management: Not Supported 00:32:28.852 Variable Capacity Management: Not Supported 00:32:28.852 Delete Endurance Group: Not Supported 00:32:28.852 Delete NVM Set: Not Supported 00:32:28.852 Extended LBA Formats Supported: Not Supported 00:32:28.852 Flexible Data Placement Supported: Not Supported 00:32:28.852 00:32:28.852 Controller Memory Buffer Support 
00:32:28.852 ================================ 00:32:28.852 Supported: No 00:32:28.852 00:32:28.852 Persistent Memory Region Support 00:32:28.852 ================================ 00:32:28.852 Supported: No 00:32:28.852 00:32:28.852 Admin Command Set Attributes 00:32:28.852 ============================ 00:32:28.852 Security Send/Receive: Not Supported 00:32:28.852 Format NVM: Not Supported 00:32:28.852 Firmware Activate/Download: Not Supported 00:32:28.852 Namespace Management: Not Supported 00:32:28.852 Device Self-Test: Not Supported 00:32:28.852 Directives: Not Supported 00:32:28.852 NVMe-MI: Not Supported 00:32:28.852 Virtualization Management: Not Supported 00:32:28.852 Doorbell Buffer Config: Not Supported 00:32:28.852 Get LBA Status Capability: Not Supported 00:32:28.852 Command & Feature Lockdown Capability: Not Supported 00:32:28.852 Abort Command Limit: 4 00:32:28.852 Async Event Request Limit: 4 00:32:28.852 Number of Firmware Slots: N/A 00:32:28.852 Firmware Slot 1 Read-Only: N/A 00:32:28.852 Firmware Activation Without Reset: N/A 00:32:28.852 Multiple Update Detection Support: N/A 00:32:28.852 Firmware Update Granularity: No Information Provided 00:32:28.852 Per-Namespace SMART Log: Yes 00:32:28.852 Asymmetric Namespace Access Log Page: Supported 00:32:28.852 ANA Transition Time : 10 sec 00:32:28.852 00:32:28.852 Asymmetric Namespace Access Capabilities 00:32:28.852 ANA Optimized State : Supported 00:32:28.852 ANA Non-Optimized State : Supported 00:32:28.852 ANA Inaccessible State : Supported 00:32:28.852 ANA Persistent Loss State : Supported 00:32:28.852 ANA Change State : Supported 00:32:28.852 ANAGRPID is not changed : No 00:32:28.852 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:28.852 00:32:28.852 ANA Group Identifier Maximum : 128 00:32:28.852 Number of ANA Group Identifiers : 128 00:32:28.852 Max Number of Allowed Namespaces : 1024 00:32:28.853 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:28.853 Command Effects Log Page: Supported 
00:32:28.853 Get Log Page Extended Data: Supported 00:32:28.853 Telemetry Log Pages: Not Supported 00:32:28.853 Persistent Event Log Pages: Not Supported 00:32:28.853 Supported Log Pages Log Page: May Support 00:32:28.853 Commands Supported & Effects Log Page: Not Supported 00:32:28.853 Feature Identifiers & Effects Log Page:May Support 00:32:28.853 NVMe-MI Commands & Effects Log Page: May Support 00:32:28.853 Data Area 4 for Telemetry Log: Not Supported 00:32:28.853 Error Log Page Entries Supported: 128 00:32:28.853 Keep Alive: Supported 00:32:28.853 Keep Alive Granularity: 1000 ms 00:32:28.853 00:32:28.853 NVM Command Set Attributes 00:32:28.853 ========================== 00:32:28.853 Submission Queue Entry Size 00:32:28.853 Max: 64 00:32:28.853 Min: 64 00:32:28.853 Completion Queue Entry Size 00:32:28.853 Max: 16 00:32:28.853 Min: 16 00:32:28.853 Number of Namespaces: 1024 00:32:28.853 Compare Command: Not Supported 00:32:28.853 Write Uncorrectable Command: Not Supported 00:32:28.853 Dataset Management Command: Supported 00:32:28.853 Write Zeroes Command: Supported 00:32:28.853 Set Features Save Field: Not Supported 00:32:28.853 Reservations: Not Supported 00:32:28.853 Timestamp: Not Supported 00:32:28.853 Copy: Not Supported 00:32:28.853 Volatile Write Cache: Present 00:32:28.853 Atomic Write Unit (Normal): 1 00:32:28.853 Atomic Write Unit (PFail): 1 00:32:28.853 Atomic Compare & Write Unit: 1 00:32:28.853 Fused Compare & Write: Not Supported 00:32:28.853 Scatter-Gather List 00:32:28.853 SGL Command Set: Supported 00:32:28.853 SGL Keyed: Not Supported 00:32:28.853 SGL Bit Bucket Descriptor: Not Supported 00:32:28.853 SGL Metadata Pointer: Not Supported 00:32:28.853 Oversized SGL: Not Supported 00:32:28.853 SGL Metadata Address: Not Supported 00:32:28.853 SGL Offset: Supported 00:32:28.853 Transport SGL Data Block: Not Supported 00:32:28.853 Replay Protected Memory Block: Not Supported 00:32:28.853 00:32:28.853 Firmware Slot Information 00:32:28.853 
========================= 00:32:28.853 Active slot: 0 00:32:28.853 00:32:28.853 Asymmetric Namespace Access 00:32:28.853 =========================== 00:32:28.853 Change Count : 0 00:32:28.853 Number of ANA Group Descriptors : 1 00:32:28.853 ANA Group Descriptor : 0 00:32:28.853 ANA Group ID : 1 00:32:28.853 Number of NSID Values : 1 00:32:28.853 Change Count : 0 00:32:28.853 ANA State : 1 00:32:28.853 Namespace Identifier : 1 00:32:28.853 00:32:28.853 Commands Supported and Effects 00:32:28.853 ============================== 00:32:28.853 Admin Commands 00:32:28.853 -------------- 00:32:28.853 Get Log Page (02h): Supported 00:32:28.853 Identify (06h): Supported 00:32:28.853 Abort (08h): Supported 00:32:28.853 Set Features (09h): Supported 00:32:28.853 Get Features (0Ah): Supported 00:32:28.853 Asynchronous Event Request (0Ch): Supported 00:32:28.853 Keep Alive (18h): Supported 00:32:28.853 I/O Commands 00:32:28.853 ------------ 00:32:28.853 Flush (00h): Supported 00:32:28.853 Write (01h): Supported LBA-Change 00:32:28.853 Read (02h): Supported 00:32:28.853 Write Zeroes (08h): Supported LBA-Change 00:32:28.853 Dataset Management (09h): Supported 00:32:28.853 00:32:28.853 Error Log 00:32:28.853 ========= 00:32:28.853 Entry: 0 00:32:28.853 Error Count: 0x3 00:32:28.853 Submission Queue Id: 0x0 00:32:28.853 Command Id: 0x5 00:32:28.853 Phase Bit: 0 00:32:28.853 Status Code: 0x2 00:32:28.853 Status Code Type: 0x0 00:32:28.853 Do Not Retry: 1 00:32:28.853 Error Location: 0x28 00:32:28.853 LBA: 0x0 00:32:28.853 Namespace: 0x0 00:32:28.853 Vendor Log Page: 0x0 00:32:28.853 ----------- 00:32:28.853 Entry: 1 00:32:28.853 Error Count: 0x2 00:32:28.853 Submission Queue Id: 0x0 00:32:28.853 Command Id: 0x5 00:32:28.853 Phase Bit: 0 00:32:28.853 Status Code: 0x2 00:32:28.853 Status Code Type: 0x0 00:32:28.853 Do Not Retry: 1 00:32:28.853 Error Location: 0x28 00:32:28.853 LBA: 0x0 00:32:28.853 Namespace: 0x0 00:32:28.853 Vendor Log Page: 0x0 00:32:28.853 ----------- 00:32:28.853 
Entry: 2 00:32:28.853 Error Count: 0x1 00:32:28.853 Submission Queue Id: 0x0 00:32:28.853 Command Id: 0x4 00:32:28.853 Phase Bit: 0 00:32:28.853 Status Code: 0x2 00:32:28.853 Status Code Type: 0x0 00:32:28.853 Do Not Retry: 1 00:32:28.853 Error Location: 0x28 00:32:28.853 LBA: 0x0 00:32:28.853 Namespace: 0x0 00:32:28.853 Vendor Log Page: 0x0 00:32:28.853 00:32:28.853 Number of Queues 00:32:28.853 ================ 00:32:28.853 Number of I/O Submission Queues: 128 00:32:28.853 Number of I/O Completion Queues: 128 00:32:28.853 00:32:28.853 ZNS Specific Controller Data 00:32:28.853 ============================ 00:32:28.853 Zone Append Size Limit: 0 00:32:28.853 00:32:28.853 00:32:28.853 Active Namespaces 00:32:28.853 ================= 00:32:28.853 get_feature(0x05) failed 00:32:28.853 Namespace ID:1 00:32:28.853 Command Set Identifier: NVM (00h) 00:32:28.853 Deallocate: Supported 00:32:28.853 Deallocated/Unwritten Error: Not Supported 00:32:28.853 Deallocated Read Value: Unknown 00:32:28.853 Deallocate in Write Zeroes: Not Supported 00:32:28.853 Deallocated Guard Field: 0xFFFF 00:32:28.853 Flush: Supported 00:32:28.853 Reservation: Not Supported 00:32:28.853 Namespace Sharing Capabilities: Multiple Controllers 00:32:28.853 Size (in LBAs): 1953525168 (931GiB) 00:32:28.853 Capacity (in LBAs): 1953525168 (931GiB) 00:32:28.853 Utilization (in LBAs): 1953525168 (931GiB) 00:32:28.853 UUID: ab0e37b2-ca80-433b-b5d0-06f43e8b04b1 00:32:28.853 Thin Provisioning: Not Supported 00:32:28.853 Per-NS Atomic Units: Yes 00:32:28.853 Atomic Boundary Size (Normal): 0 00:32:28.853 Atomic Boundary Size (PFail): 0 00:32:28.853 Atomic Boundary Offset: 0 00:32:28.853 NGUID/EUI64 Never Reused: No 00:32:28.853 ANA group ID: 1 00:32:28.853 Namespace Write Protected: No 00:32:28.853 Number of LBA Formats: 1 00:32:28.853 Current LBA Format: LBA Format #00 00:32:28.853 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:28.853 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:28.853 rmmod nvme_tcp 00:32:28.853 rmmod nvme_fabrics 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:28.853 14:08:06 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:30.760 14:08:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:32.134 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:32.134 0000:00:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:32:32.134 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:32.134 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:33.072 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:33.331 00:32:33.331 real 0m9.212s 00:32:33.331 user 0m2.001s 00:32:33.331 sys 0m3.141s 00:32:33.331 14:08:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:33.331 14:08:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.331 ************************************ 00:32:33.331 END TEST nvmf_identify_kernel_target 00:32:33.331 ************************************ 00:32:33.331 14:08:11 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:33.331 14:08:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:33.331 14:08:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:33.331 14:08:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.331 ************************************ 00:32:33.331 START TEST nvmf_auth_host 00:32:33.331 ************************************ 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:33.331 * Looking for test storage... 
00:32:33.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.331 
14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:33.331 
14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.331 14:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:33.332 14:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.332 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:33.332 14:08:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:33.332 14:08:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:33.332 14:08:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:35.237 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.237 14:08:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:35.237 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:32:35.237 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:35.237 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.237 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:35.238 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:35.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:35.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:32:35.494 00:32:35.494 --- 10.0.0.2 ping statistics --- 00:32:35.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.494 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:35.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:32:35.494 00:32:35.494 --- 10.0.0.1 ping statistics --- 00:32:35.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.494 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.494 14:08:13 
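The interface plumbing traced above amounts to a small, repeatable recipe: flush both ports of the NIC pair, move the target-side port into a private network namespace, address both ends, open TCP port 4420, and ping both ways. A condensed sketch (device names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addresses are taken from this log; must run as root on a machine that actually has these interfaces):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from the trace above (requires root).
set -e

NS=cvl_0_0_ns_spdk                    # namespace that will hold the target port
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"       # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic in, then verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Splitting target and initiator across a namespace boundary lets a single host exercise the full NVMe/TCP path over real NIC ports; the target app is then launched under `ip netns exec`, as the trace does next.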
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1593178 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1593178 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1593178 ']' 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:35.494 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:35.752 14:08:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6abc3b90c10a624c85551170bfcecc60 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.54U 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6abc3b90c10a624c85551170bfcecc60 0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6abc3b90c10a624c85551170bfcecc60 0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6abc3b90c10a624c85551170bfcecc60 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.54U 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.54U 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.54U 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=49caaddab2b010f49da5887c2361b5d68813390a39589a53f44dff76b6f6f193 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.a2n 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 49caaddab2b010f49da5887c2361b5d68813390a39589a53f44dff76b6f6f193 3 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 49caaddab2b010f49da5887c2361b5d68813390a39589a53f44dff76b6f6f193 3 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=49caaddab2b010f49da5887c2361b5d68813390a39589a53f44dff76b6f6f193 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.a2n 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.a2n 00:32:35.752 14:08:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.a2n 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cebf135aa680b05e8c6c4bbe97bc4146f758d5ac137c5da5 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.z9j 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cebf135aa680b05e8c6c4bbe97bc4146f758d5ac137c5da5 0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cebf135aa680b05e8c6c4bbe97bc4146f758d5ac137c5da5 0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cebf135aa680b05e8c6c4bbe97bc4146f758d5ac137c5da5 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:35.752 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.z9j 00:32:36.010 14:08:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.z9j 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.z9j 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=256475d23011e1c8185b495958feafdd6139b9724ee45ba5 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6F2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 256475d23011e1c8185b495958feafdd6139b9724ee45ba5 2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 256475d23011e1c8185b495958feafdd6139b9724ee45ba5 2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=256475d23011e1c8185b495958feafdd6139b9724ee45ba5 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:36.010 14:08:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6F2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6F2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6F2 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fd8f92dd0c32c96cbff20c01a09a4f78 00:32:36.010 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tod 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fd8f92dd0c32c96cbff20c01a09a4f78 1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fd8f92dd0c32c96cbff20c01a09a4f78 1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fd8f92dd0c32c96cbff20c01a09a4f78 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tod 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tod 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.tod 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bf426a3ea6259891552f67604c9c88d 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xU1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bf426a3ea6259891552f67604c9c88d 1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bf426a3ea6259891552f67604c9c88d 1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bf426a3ea6259891552f67604c9c88d 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:36.011 
14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xU1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xU1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xU1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9ccffb8edb62bbfba767d03f987ef1dac4d77a97d2ad1570 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Dm6 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9ccffb8edb62bbfba767d03f987ef1dac4d77a97d2ad1570 2 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9ccffb8edb62bbfba767d03f987ef1dac4d77a97d2ad1570 2 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=9ccffb8edb62bbfba767d03f987ef1dac4d77a97d2ad1570 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Dm6 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Dm6 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Dm6 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7109aa4c809cacd088fa6a2354e1ca68 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6nO 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7109aa4c809cacd088fa6a2354e1ca68 0 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7109aa4c809cacd088fa6a2354e1ca68 0 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:36.011 14:08:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7109aa4c809cacd088fa6a2354e1ca68 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:36.011 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6nO 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6nO 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6nO 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fbdab653c35942f7cb66df3398cd23871cb0689475762ae2c58ddece21c47c5d 00:32:36.269 14:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.IDr 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fbdab653c35942f7cb66df3398cd23871cb0689475762ae2c58ddece21c47c5d 3 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fbdab653c35942f7cb66df3398cd23871cb0689475762ae2c58ddece21c47c5d 3 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fbdab653c35942f7cb66df3398cd23871cb0689475762ae2c58ddece21c47c5d 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.IDr 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.IDr 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.IDr 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1593178 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1593178 ']' 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
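Each `gen_dhchap_key` invocation above follows the same pattern: read `len/2` random bytes, hex-encode them to a `len`-character key, then wrap that key in a DHHC-1 secret via a small Python step. A minimal re-creation (the helper names here are mine, and the exact payload layout — base64 of the key bytes with a little-endian CRC32 appended — is an assumption inferred from the trace, not taken verbatim from it):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of gen_dhchap_key/format_dhchap_key from the trace.
set -e

gen_key_hex() {
    # $1 = desired hex length (32, 48, or 64 in the log above);
    # od is used instead of the trace's xxd for portability.
    od -An -tx1 -N "$(($1 / 2))" /dev/urandom | tr -d ' \n'
}

format_dhchap_key() {
    # $1 = hex key string, $2 = digest id (0=null 1=sha256 2=sha384 3=sha512)
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC32, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$1" "$2"
}

key=$(gen_key_hex 64)                  # like "gen_dhchap_key sha512 64"
secret=$(format_dhchap_key "$key" 3)
echo "$secret"                         # DHHC-1:03:<base64 payload>:
```

The trace then writes each formatted secret to a `mktemp`-created file (`/tmp/spdk.key-<digest>.XXX`) and `chmod 0600`s it, building up the `keys[]`/`ckeys[]` arrays used below.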
00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:36.269 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.54U 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.a2n ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.a2n 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.z9j 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.6F2 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6F2 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tod 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xU1 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xU1 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Dm6 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 
14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6nO ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6nO 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.IDr 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- 
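The loop above hands each generated key file to the running target over JSON-RPC, pairing `keyN` with its optional controller key `ckeyN`. Standalone, the same registrations would look like this (socket path and key names from the log; `rpc.py` is SPDK's RPC client, assumed to be on PATH):

```shell
# Register the generated DHCHAP secrets with the target (paths from the log).
rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.54U
rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.a2n
rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.z9j
rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6F2
# ...and likewise for key2/ckey2, key3/ckey3, and key4 (which has no ckey).
```

Note `ckeys[4]` is empty in the trace, so the final `[[ -n '' ]]` guard skips the controller-key registration for key4.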
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:36.528 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:36.529 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:36.529 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:36.529 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:36.529 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:36.529 14:08:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:37.491 Waiting for block devices as requested 00:32:37.748 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:37.748 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.005 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:38.005 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:38.005 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:38.005 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:38.261 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:38.261 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:38.261 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:38.261 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.517 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:38.517 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:38.517 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:38.773 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:38.773 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:38.773 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:38.773 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:39.338 No valid GPT data, bailing 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:39.338 00:32:39.338 Discovery Log Number of Records 2, Generation counter 2 00:32:39.338 =====Discovery Log Entry 0====== 00:32:39.338 trtype: tcp 00:32:39.338 adrfam: ipv4 00:32:39.338 subtype: current discovery subsystem 00:32:39.338 treq: not specified, sq flow control disable supported 00:32:39.338 portid: 1 00:32:39.338 trsvcid: 4420 00:32:39.338 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:39.338 traddr: 10.0.0.1 00:32:39.338 eflags: none 00:32:39.338 sectype: none 00:32:39.338 =====Discovery Log Entry 1====== 00:32:39.338 trtype: tcp 00:32:39.338 adrfam: ipv4 00:32:39.338 subtype: nvme subsystem 00:32:39.338 treq: not specified, sq flow control disable supported 00:32:39.338 portid: 1 00:32:39.338 trsvcid: 4420 00:32:39.338 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:39.338 traddr: 10.0.0.1 00:32:39.338 eflags: none 00:32:39.338 sectype: none 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.338 14:08:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:39.338 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.339 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.595 nvme0n1 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.595 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.596 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.853 nvme0n1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.853 14:08:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.853 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.111 nvme0n1 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.111 14:08:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.111 14:08:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.111 nvme0n1 00:32:40.111 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.111 14:08:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.111 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.111 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.111 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.368 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.369 nvme0n1 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.369 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.626 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.627 nvme0n1 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.627 14:08:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.627 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.884 nvme0n1 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.884 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.143 14:08:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.143 14:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.143 nvme0n1 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:41.143 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.402 14:08:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.402 nvme0n1 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.402 14:08:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.402 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.660 nvme0n1 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.660 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.661 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.919 nvme0n1 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.919 14:08:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.919 14:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.484 nvme0n1 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:42.484 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.485 
14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.485 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 nvme0n1 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 14:08:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.742 14:08:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.742 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.000 nvme0n1 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.000 14:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.564 nvme0n1 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.564 14:08:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.564 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.821 nvme0n1 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:43.821 
14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.821 14:08:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.387 nvme0n1 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.387 14:08:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.387 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.388 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.388 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.388 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.954 nvme0n1 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.954 14:08:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.520 nvme0n1 
00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.520 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.084 nvme0n1 00:32:46.084 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.084 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.084 14:08:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.084 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.084 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.084 14:08:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.084 14:08:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.084 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.649 nvme0n1 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.649 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 
00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.907 14:08:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.907 14:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.840 nvme0n1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.840 14:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.773 nvme0n1 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.773 14:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.706 nvme0n1 00:32:49.706 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.706 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.706 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.706 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.706 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.706 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:49.964 14:08:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.964 14:08:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.964 14:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.899 nvme0n1 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.899 14:08:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.899 14:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.834 nvme0n1 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.834 
14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.834 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.092 nvme0n1 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.092 
14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.092 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.093 14:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.093 nvme0n1 00:32:52.093 14:08:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.093 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.093 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.093 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.093 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.093 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.351 14:08:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.351 14:08:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.351 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.352 nvme0n1 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.352 
14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.352 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.632 14:08:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.632 nvme0n1 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.632 14:08:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.632 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.633 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.961 nvme0n1 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.961 14:08:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.961 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.220 nvme0n1 00:32:53.220 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.220 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.220 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.220 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.220 14:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.220 14:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:53.220 14:08:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:53.220 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.221 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.478 nvme0n1 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:32:53.478 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.479 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.736 nvme0n1 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.736 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.737 14:08:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.737 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.995 nvme0n1 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.995 14:08:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.995 14:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.253 nvme0n1 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.253 
14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.253 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.254 
14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.254 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.512 nvme0n1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.512 14:08:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.512 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.078 nvme0n1 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.078 14:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.336 nvme0n1 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.336 14:08:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.336 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.593 nvme0n1 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.593 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.594 14:08:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:55.594 14:08:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.594 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.157 nvme0n1 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.157 14:08:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.157 14:08:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.157 14:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.721 nvme0n1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.721 
14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.721 14:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.286 nvme0n1 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.286 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.287 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.851 nvme0n1 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.851 14:08:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.851 14:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.416 nvme0n1 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.416 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.981 nvme0n1 00:32:58.981 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.981 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.981 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.981 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.981 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.981 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:59.239 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:59.240 14:08:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.240 14:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.240 14:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.240 14:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.240 14:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.173 nvme0n1 00:33:00.173 14:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.173 14:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.173 14:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.173 14:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.173 14:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.173 14:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.174 14:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.106 nvme0n1 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.106 14:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.478 nvme0n1 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.478 14:08:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.478 14:08:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.478 14:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.411 nvme0n1 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.411 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.412 
14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.412 14:08:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.412 14:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.343 nvme0n1 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:04.343 
14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.343 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.344 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.344 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.344 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.344 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.344 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.344 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.601 nvme0n1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.601 14:08:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.601 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.858 nvme0n1 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.858 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.859 
14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:04.859 nvme0n1 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.859 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:05.117 
14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.117 14:08:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.117 14:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.117 nvme0n1 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:05.117 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.375 nvme0n1 00:33:05.375 14:08:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.375 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.376 14:08:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.376 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.633 nvme0n1 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.633 
14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.633 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.634 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.892 nvme0n1 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.892 14:08:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.892 14:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.151 nvme0n1 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.151 14:08:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.151 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.410 nvme0n1 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.410 14:08:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.410 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 nvme0n1 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:33:06.669 14:08:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.669 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 nvme0n1 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:07.237 14:08:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.237 14:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.237 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.496 nvme0n1 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.496 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.755 nvme0n1 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.755 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.015 nvme0n1 00:33:08.015 14:08:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.015 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.015 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.015 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.015 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.015 14:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.273 14:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.273 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.532 nvme0n1 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.532 
14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.532 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.532 14:08:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.130 nvme0n1 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.130 14:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.130 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.701 nvme0n1 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.701 14:08:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:09.701 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.702 14:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.266 nvme0n1 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.266 14:08:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.266 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.831 nvme0n1 00:33:10.831 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.831 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.831 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.831 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.831 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.831 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.089 14:08:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.089 14:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.654 nvme0n1 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmFiYzNiOTBjMTBhNjI0Yzg1NTUxMTcwYmZjZWNjNjCEtisn: 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDljYWFkZGFiMmIwMTBmNDlkYTU4ODdjMjM2MWI1ZDY4ODEzMzkwYTM5NTg5YTUzZjQ0ZGZmNzZiNmY2ZjE5M0rP8Us=: 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.654 14:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.587 nvme0n1 00:33:12.587 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.588 14:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.520 nvme0n1 00:33:13.520 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.520 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.520 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.520 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.520 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.520 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmQ4ZjkyZGQwYzMyYzk2Y2JmZjIwYzAxYTA5YTRmNzi6Vlok: 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWJmNDI2YTNlYTYyNTk4OTE1NTJmNjc2MDRjOWM4OGQUxhCB: 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.779 14:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.714 nvme0n1 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:14.714 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWNjZmZiOGVkYjYyYmJmYmE3NjdkMDNmOTg3ZWYxZGFjNGQ3N2E5N2QyYWQxNTcwzcPutg==: 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: ]] 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzEwOWFhNGM4MDljYWNkMDg4ZmE2YTIzNTRlMWNhNjiThplF: 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.715 14:08:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.715 14:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.644 nvme0n1 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.645 14:08:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmJkYWI2NTNjMzU5NDJmN2NiNjZkZjMzOThjZDIzODcxY2IwNjg5NDc1NzYyYWUyYzU4ZGRlY2UyMWM0N2M1ZK78qDg=: 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:15.645 14:08:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.645 14:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.576 nvme0n1 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ViZjEzNWFhNjgwYjA1ZThjNmM0YmJlOTdiYzQxNDZmNzU4ZDVhYzEzN2M1ZGE1hys5GQ==: 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjU2NDc1ZDIzMDExZTFjODE4NWI0OTU5NThmZWFmZGQ2MTM5Yjk3MjRlZTQ1YmE1z7kyuQ==: 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.576 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.834 request: 00:33:16.834 { 00:33:16.834 "name": "nvme0", 00:33:16.834 "trtype": "tcp", 00:33:16.834 "traddr": "10.0.0.1", 00:33:16.834 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:16.834 "adrfam": "ipv4", 00:33:16.834 "trsvcid": "4420", 00:33:16.834 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:16.834 "method": "bdev_nvme_attach_controller", 00:33:16.834 "req_id": 1 00:33:16.834 } 00:33:16.834 Got JSON-RPC error response 00:33:16.834 response: 00:33:16.834 { 00:33:16.834 "code": -5, 00:33:16.834 "message": "Input/output error" 00:33:16.834 } 00:33:16.834 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:16.834 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:16.834 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:16.835 14:08:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 request: 00:33:16.835 { 00:33:16.835 "name": "nvme0", 00:33:16.835 "trtype": "tcp", 00:33:16.835 "traddr": "10.0.0.1", 00:33:16.835 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:16.835 "adrfam": "ipv4", 00:33:16.835 "trsvcid": "4420", 00:33:16.835 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:16.835 "dhchap_key": "key2", 00:33:16.835 "method": "bdev_nvme_attach_controller", 00:33:16.835 "req_id": 1 00:33:16.835 } 00:33:16.835 Got JSON-RPC error response 00:33:16.835 response: 00:33:16.835 { 00:33:16.835 "code": -5, 00:33:16.835 "message": "Input/output error" 00:33:16.835 } 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.835 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.092 request: 00:33:17.092 { 00:33:17.092 "name": "nvme0", 00:33:17.092 "trtype": "tcp", 00:33:17.092 "traddr": "10.0.0.1", 00:33:17.092 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:17.092 "adrfam": "ipv4", 00:33:17.092 "trsvcid": "4420", 00:33:17.092 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:17.092 "dhchap_key": "key1", 00:33:17.092 "dhchap_ctrlr_key": "ckey2", 00:33:17.092 "method": "bdev_nvme_attach_controller", 00:33:17.092 "req_id": 1 00:33:17.092 } 00:33:17.092 Got JSON-RPC error response 00:33:17.092 response: 00:33:17.092 { 00:33:17.092 
"code": -5, 00:33:17.092 "message": "Input/output error" 00:33:17.092 } 00:33:17.092 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:17.092 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:17.092 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:17.093 rmmod nvme_tcp 00:33:17.093 rmmod nvme_fabrics 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1593178 ']' 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1593178 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1593178 ']' 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@950 -- # kill -0 1593178 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1593178 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1593178' 00:33:17.093 killing process with pid 1593178 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1593178 00:33:17.093 14:08:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1593178 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:17.351 14:08:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:19.253 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:19.511 14:08:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:20.443 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:20.444 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 
00:33:20.701 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:20.701 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:21.636 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:21.636 14:08:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.54U /tmp/spdk.key-null.z9j /tmp/spdk.key-sha256.tod /tmp/spdk.key-sha384.Dm6 /tmp/spdk.key-sha512.IDr /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:21.636 14:08:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:23.013 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:23.013 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:23.013 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:23.013 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:23.013 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:23.013 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:23.013 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:23.013 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:23.013 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:23.013 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:23.013 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:23.013 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:23.013 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:23.013 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:23.013 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:23.013 0000:80:04.1 
(8086 0e21): Already using the vfio-pci driver 00:33:23.013 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:23.013 00:33:23.013 real 0m49.691s 00:33:23.013 user 0m47.382s 00:33:23.013 sys 0m5.719s 00:33:23.013 14:09:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:23.013 14:09:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.013 ************************************ 00:33:23.013 END TEST nvmf_auth_host 00:33:23.014 ************************************ 00:33:23.014 14:09:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:23.014 14:09:00 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:23.014 14:09:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:23.014 14:09:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:23.014 14:09:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.014 ************************************ 00:33:23.014 START TEST nvmf_digest 00:33:23.014 ************************************ 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:23.014 * Looking for test storage... 
00:33:23.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:23.014 14:09:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:24.917 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:24.917 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:24.917 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:24.917 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:24.917 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:24.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:33:24.918 00:33:24.918 --- 10.0.0.2 ping statistics --- 00:33:24.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.918 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:33:24.918 00:33:24.918 --- 10.0.0.1 ping statistics --- 00:33:24.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.918 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:24.918 14:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:25.176 ************************************ 00:33:25.176 START TEST nvmf_digest_clean 00:33:25.176 ************************************ 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:25.176 14:09:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1602725 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1602725 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1602725 ']' 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:25.176 14:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.176 [2024-07-14 14:09:02.980221] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:25.176 [2024-07-14 14:09:02.980306] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.176 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.176 [2024-07-14 14:09:03.043497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.176 [2024-07-14 14:09:03.130176] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.176 [2024-07-14 14:09:03.130249] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.176 [2024-07-14 14:09:03.130263] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.176 [2024-07-14 14:09:03.130274] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.176 [2024-07-14 14:09:03.130291] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:25.176 [2024-07-14 14:09:03.130317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.434 null0 00:33:25.434 [2024-07-14 14:09:03.369813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.434 [2024-07-14 14:09:03.394068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:25.434 
14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1602745 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1602745 /var/tmp/bperf.sock 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1602745 ']' 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:25.434 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.691 [2024-07-14 14:09:03.444729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:25.691 [2024-07-14 14:09:03.444803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602745 ] 00:33:25.691 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.691 [2024-07-14 14:09:03.504679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.691 [2024-07-14 14:09:03.592515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.691 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.691 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:25.691 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:25.691 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:25.691 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.255 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.255 14:09:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.512 nvme0n1 00:33:26.512 14:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:26.512 14:09:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:26.512 Running I/O for 2 seconds... 00:33:29.065 00:33:29.065 Latency(us) 00:33:29.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.065 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:29.065 nvme0n1 : 2.01 17907.08 69.95 0.00 0.00 7138.54 3665.16 17282.09 00:33:29.065 =================================================================================================================== 00:33:29.065 Total : 17907.08 69.95 0.00 0.00 7138.54 3665.16 17282.09 00:33:29.065 0 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:29.065 | select(.opcode=="crc32c") 00:33:29.065 | "\(.module_name) \(.executed)"' 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:29.065 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1602745 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1602745 ']' 00:33:29.066 14:09:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1602745 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1602745 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1602745' 00:33:29.066 killing process with pid 1602745 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1602745 00:33:29.066 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.066 00:33:29.066 Latency(us) 00:33:29.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.066 =================================================================================================================== 00:33:29.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.066 14:09:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1602745 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:29.066 14:09:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1603173 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1603173 /var/tmp/bperf.sock 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1603173 ']' 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:29.066 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:29.323 [2024-07-14 14:09:07.067521] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:29.323 [2024-07-14 14:09:07.067597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603173 ] 00:33:29.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.323 Zero copy mechanism will not be used. 00:33:29.323 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.323 [2024-07-14 14:09:07.126926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.323 [2024-07-14 14:09:07.213718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.323 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:29.323 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:29.323 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.323 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.323 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:29.888 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.888 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.146 nvme0n1 00:33:30.146 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:30.146 14:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:30.146 Zero copy mechanism will not be used. 00:33:30.146 Running I/O for 2 seconds... 00:33:32.678 00:33:32.678 Latency(us) 00:33:32.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.678 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:32.678 nvme0n1 : 2.00 5800.36 725.05 0.00 0.00 2753.99 621.99 9077.95 00:33:32.678 =================================================================================================================== 00:33:32.678 Total : 5800.36 725.05 0.00 0.00 2753.99 621.99 9077.95 00:33:32.678 0 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:32.678 | select(.opcode=="crc32c") 00:33:32.678 | "\(.module_name) \(.executed)"' 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:32.678 14:09:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1603173 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1603173 ']' 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1603173 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1603173 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1603173' 00:33:32.678 killing process with pid 1603173 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1603173 00:33:32.678 Received shutdown signal, test time was about 2.000000 seconds 00:33:32.678 00:33:32.678 Latency(us) 00:33:32.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.678 =================================================================================================================== 00:33:32.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1603173 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:32.678 14:09:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1604086 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1604086 /var/tmp/bperf.sock 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1604086 ']' 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.678 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:32.936 [2024-07-14 14:09:10.672749] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:32.936 [2024-07-14 14:09:10.672824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604086 ] 00:33:32.936 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.936 [2024-07-14 14:09:10.734884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.936 [2024-07-14 14:09:10.825140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.936 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.936 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:32.936 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:32.936 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:32.936 14:09:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:33.500 14:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.500 14:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.756 nvme0n1 00:33:33.756 14:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:33.756 14:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:33.756 Running I/O for 2 seconds... 00:33:36.279 00:33:36.279 Latency(us) 00:33:36.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.279 nvme0n1 : 2.00 20423.09 79.78 0.00 0.00 6256.78 3203.98 10874.12 00:33:36.279 =================================================================================================================== 00:33:36.279 Total : 20423.09 79.78 0.00 0.00 6256.78 3203.98 10874.12 00:33:36.279 0 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:36.279 | select(.opcode=="crc32c") 00:33:36.279 | "\(.module_name) \(.executed)"' 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1604086 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1604086 ']' 00:33:36.279 14:09:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1604086 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1604086 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1604086' 00:33:36.279 killing process with pid 1604086 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1604086 00:33:36.279 Received shutdown signal, test time was about 2.000000 seconds 00:33:36.279 00:33:36.279 Latency(us) 00:33:36.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.279 =================================================================================================================== 00:33:36.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.279 14:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1604086 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:36.279 14:09:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1604533 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1604533 /var/tmp/bperf.sock 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1604533 ']' 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:36.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.279 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.279 [2024-07-14 14:09:14.231067] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:36.279 [2024-07-14 14:09:14.231156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604533 ] 00:33:36.279 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.279 Zero copy mechanism will not be used. 00:33:36.536 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.537 [2024-07-14 14:09:14.298429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.537 [2024-07-14 14:09:14.395027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.537 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:36.537 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:36.537 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:36.537 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:36.537 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:37.102 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.102 14:09:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.360 nvme0n1 00:33:37.360 14:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:37.360 14:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.360 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.360 Zero copy mechanism will not be used. 00:33:37.360 Running I/O for 2 seconds... 00:33:39.887 00:33:39.887 Latency(us) 00:33:39.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.887 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:39.887 nvme0n1 : 2.00 5646.26 705.78 0.00 0.00 2825.48 2184.53 5849.69 00:33:39.887 =================================================================================================================== 00:33:39.887 Total : 5646.26 705.78 0.00 0.00 2825.48 2184.53 5849.69 00:33:39.887 0 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:39.887 | select(.opcode=="crc32c") 00:33:39.887 | "\(.module_name) \(.executed)"' 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:39.887 14:09:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1604533 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1604533 ']' 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1604533 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1604533 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1604533' 00:33:39.887 killing process with pid 1604533 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1604533 00:33:39.887 Received shutdown signal, test time was about 2.000000 seconds 00:33:39.887 00:33:39.887 Latency(us) 00:33:39.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.887 =================================================================================================================== 00:33:39.887 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1604533 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1602725 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1602725 ']' 00:33:39.887 14:09:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1602725 00:33:39.887 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1602725 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1602725' 00:33:40.145 killing process with pid 1602725 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1602725 00:33:40.145 14:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1602725 00:33:40.145 00:33:40.145 real 0m15.185s 00:33:40.145 user 0m29.965s 00:33:40.145 sys 0m4.335s 00:33:40.145 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:40.145 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.145 ************************************ 00:33:40.145 END TEST nvmf_digest_clean 00:33:40.145 ************************************ 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.403 
************************************ 00:33:40.403 START TEST nvmf_digest_error 00:33:40.403 ************************************ 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1605045 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1605045 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1605045 ']' 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:40.403 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.403 [2024-07-14 14:09:18.221079] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:40.403 [2024-07-14 14:09:18.221181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.403 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.403 [2024-07-14 14:09:18.284439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.403 [2024-07-14 14:09:18.369745] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.403 [2024-07-14 14:09:18.369797] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.403 [2024-07-14 14:09:18.369820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.403 [2024-07-14 14:09:18.369831] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.403 [2024-07-14 14:09:18.369841] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:40.403 [2024-07-14 14:09:18.369865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.660 [2024-07-14 14:09:18.454478] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.660 null0 00:33:40.660 [2024-07-14 14:09:18.570146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.660 
[2024-07-14 14:09:18.594385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1605064 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1605064 /var/tmp/bperf.sock 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1605064 ']' 00:33:40.660 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.661 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:40.661 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:40.661 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:40.661 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:40.661 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:40.917 [2024-07-14 14:09:18.644787] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:40.917 [2024-07-14 14:09:18.644873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605064 ] 00:33:40.917 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.917 [2024-07-14 14:09:18.708454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.917 [2024-07-14 14:09:18.798948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.174 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:41.174 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:41.174 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:41.174 14:09:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:41.431 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:41.431 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.431 
14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:41.431 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.432 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.432 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.688 nvme0n1 00:33:41.688 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:41.688 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.688 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:41.689 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.689 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:41.689 14:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.689 Running I/O for 2 seconds... 
00:33:41.946 [2024-07-14 14:09:19.691484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.946 [2024-07-14 14:09:19.691538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.946 [2024-07-14 14:09:19.691561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.946 [2024-07-14 14:09:19.709396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.946 [2024-07-14 14:09:19.709434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.946 [2024-07-14 14:09:19.709454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.946 [2024-07-14 14:09:19.723754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.946 [2024-07-14 14:09:19.723791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.946 [2024-07-14 14:09:19.723810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.946 [2024-07-14 14:09:19.735571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.946 [2024-07-14 14:09:19.735607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.946 [2024-07-14 14:09:19.735627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.752308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.752343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.752362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.764140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.764177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.764209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.781442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.781478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.781499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.797592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.797627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.797647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.810416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.810452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.810471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.826377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.826411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.826431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.839660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.839713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.852147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.852195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22315 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.852213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.865744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.865779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.865799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.878790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.878825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.878844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.893673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.893707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.893726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.909102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.909134] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.909151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:41.947 [2024-07-14 14:09:19.921164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:41.947 [2024-07-14 14:09:19.921219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.947 [2024-07-14 14:09:19.921236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:19.936945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:19.936992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:19.937009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:19.949047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:19.949076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:19.949092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:19.965119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 
14:09:19.965151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:19.965183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:19.976523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:19.976551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:19.976580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:19.991603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:19.991632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:19.991663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.003144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.003191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.003215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.016386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.016421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.016438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.030392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.030426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.030445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.047147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.047184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.047203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.062239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.062274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.062292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.074543] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.074577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.074610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.089725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.089758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.089777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.100813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.100843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.100882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.115032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.115062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.115079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.130185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.130226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.130258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.145590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.145620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.145637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.157014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.157042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.157059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.206 [2024-07-14 14:09:20.173515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.206 [2024-07-14 14:09:20.173545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.206 [2024-07-14 14:09:20.173562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.465 [2024-07-14 14:09:20.188885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.465 [2024-07-14 14:09:20.188915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.465 [2024-07-14 14:09:20.188932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.465 [2024-07-14 14:09:20.200561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.465 [2024-07-14 14:09:20.200589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.465 [2024-07-14 14:09:20.200620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.465 [2024-07-14 14:09:20.217218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.465 [2024-07-14 14:09:20.217262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.465 [2024-07-14 14:09:20.217277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.465 [2024-07-14 14:09:20.233194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:42.465 [2024-07-14 14:09:20.233238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.465 [2024-07-14 
14:09:20.233254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.465 [2024-07-14 14:09:20.248280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.465 [2024-07-14 14:09:20.248311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.465 [2024-07-14 14:09:20.248327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.465 [2024-07-14 14:09:20.259742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.465 [2024-07-14 14:09:20.259784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.465 [2024-07-14 14:09:20.259799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.465 [2024-07-14 14:09:20.274159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.465 [2024-07-14 14:09:20.274190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.465 [2024-07-14 14:09:20.274207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.465 [2024-07-14 14:09:20.289046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.289076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.289093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.300395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.300423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.300454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.316704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.316731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.316761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.327748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.327791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.327806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.343915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.343945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.343961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.359646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.359675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.359706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.376285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.376319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.376350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.390838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.390889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.390908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.406792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.406835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.406851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.419969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.420004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.420020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.430172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.430215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.430231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.466 [2024-07-14 14:09:20.445054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.466 [2024-07-14 14:09:20.445085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.466 [2024-07-14 14:09:20.445102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.724 [2024-07-14 14:09:20.458408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.724 [2024-07-14 14:09:20.458438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.724 [2024-07-14 14:09:20.458455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.724 [2024-07-14 14:09:20.470198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.724 [2024-07-14 14:09:20.470227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.724 [2024-07-14 14:09:20.470258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.485606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.485633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.485663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.496468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.496498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.496515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.511838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.511866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.511905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.526458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.526489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.526505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.538061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.538089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.538119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.551214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.551256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.551271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.564132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.564159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.564192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.578080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.578109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.578126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.589509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.589539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.589556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.605449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.605478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.605502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.615837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.615866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.615890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.631127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.631156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.631172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.645887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.645916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.645932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.656452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.656479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.656493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.669991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.670019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.670051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.686977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.687006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.687037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.725 [2024-07-14 14:09:20.701341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.725 [2024-07-14 14:09:20.701371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.725 [2024-07-14 14:09:20.701387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.712609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.712639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.712656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.729486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.729526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.729544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.744329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.744361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.744378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.755480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.755507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.755538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.769606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.769652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.769668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.782279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.782308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.782324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.794760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.794806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.794822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.806960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.806990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.807007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.820100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.820129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.820146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.832669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.832695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.832727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.845182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.845211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.845227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.856851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.856901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.856919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.870082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.870112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.984 [2024-07-14 14:09:20.870129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.984 [2024-07-14 14:09:20.885063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.984 [2024-07-14 14:09:20.885093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.985 [2024-07-14 14:09:20.885110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.985 [2024-07-14 14:09:20.896863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.985 [2024-07-14 14:09:20.896910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.985 [2024-07-14 14:09:20.896943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.985 [2024-07-14 14:09:20.909996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.985 [2024-07-14 14:09:20.910025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.985 [2024-07-14 14:09:20.910056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.985 [2024-07-14 14:09:20.925061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.985 [2024-07-14 14:09:20.925092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.985 [2024-07-14 14:09:20.925108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.985 [2024-07-14 14:09:20.937343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.985 [2024-07-14 14:09:20.937371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.985 [2024-07-14 14:09:20.937402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:42.985 [2024-07-14 14:09:20.952601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:42.985 [2024-07-14 14:09:20.952630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:42.985 [2024-07-14 14:09:20.952669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:20.967405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:20.967436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:20.967453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:20.979506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:20.979537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:20.979554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:20.992582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:20.992609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:20.992641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.007240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.007270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.007287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.018489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.018518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.018548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.033608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.033637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.033668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.050150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.050194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.050210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.065108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.065139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.065156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.077723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.077754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.077771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.089478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.089506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.089537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.102417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.102443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.102474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.116710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.116739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.116756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.128588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.128615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.128644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.141284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.141314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.141331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.153046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.153075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.243 [2024-07-14 14:09:21.153092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.243 [2024-07-14 14:09:21.165893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.243 [2024-07-14 14:09:21.165920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.244 [2024-07-14 14:09:21.165950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.244 [2024-07-14 14:09:21.178780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.244 [2024-07-14 14:09:21.178810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.244 [2024-07-14 14:09:21.178834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.244 [2024-07-14 14:09:21.191996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.244 [2024-07-14 14:09:21.192025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.244 [2024-07-14 14:09:21.192042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.244 [2024-07-14 14:09:21.203237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.244 [2024-07-14 14:09:21.203266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.244 [2024-07-14 14:09:21.203282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.244 [2024-07-14 14:09:21.216330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.244 [2024-07-14 14:09:21.216359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.244 [2024-07-14 14:09:21.216376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.502 [2024-07-14 14:09:21.228673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.502 [2024-07-14 14:09:21.228707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.502 [2024-07-14 14:09:21.228726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.502 [2024-07-14 14:09:21.245160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.502 [2024-07-14 14:09:21.245208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.502 [2024-07-14 14:09:21.245226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.502 [2024-07-14 14:09:21.256725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.502 [2024-07-14 14:09:21.256757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.502 [2024-07-14 14:09:21.256776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:43.502 [2024-07-14 14:09:21.270902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360)
00:33:43.502 [2024-07-14 14:09:21.270947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:43.502 [2024-07-14 14:09:21.270964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.284844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.284883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.284904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.299016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.299055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.299072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.311509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.311541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.311559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.325661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.325694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 
14:09:21.325712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.337778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.337810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.337828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.351117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.351144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.351159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.364092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.364120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.364151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.380157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.380203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16295 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.380221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.393306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.393338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.502 [2024-07-14 14:09:21.407601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.502 [2024-07-14 14:09:21.407633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.502 [2024-07-14 14:09:21.407652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.503 [2024-07-14 14:09:21.421349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.503 [2024-07-14 14:09:21.421382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.503 [2024-07-14 14:09:21.421400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.503 [2024-07-14 14:09:21.439374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.503 [2024-07-14 14:09:21.439409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.503 [2024-07-14 14:09:21.439428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.503 [2024-07-14 14:09:21.450599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.503 [2024-07-14 14:09:21.450631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.503 [2024-07-14 14:09:21.450650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.503 [2024-07-14 14:09:21.467004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.503 [2024-07-14 14:09:21.467032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.503 [2024-07-14 14:09:21.467064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.503 [2024-07-14 14:09:21.483521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.503 [2024-07-14 14:09:21.483553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.503 [2024-07-14 14:09:21.483571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.760 [2024-07-14 14:09:21.495626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c9d360) 00:33:43.760 [2024-07-14 14:09:21.495660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-07-14 14:09:21.495678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.760 [2024-07-14 14:09:21.513152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.760 [2024-07-14 14:09:21.513180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-07-14 14:09:21.513196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.760 [2024-07-14 14:09:21.524205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.760 [2024-07-14 14:09:21.524238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-07-14 14:09:21.524256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.760 [2024-07-14 14:09:21.540826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.760 [2024-07-14 14:09:21.540859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.760 [2024-07-14 14:09:21.540892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.760 [2024-07-14 14:09:21.556734] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.556767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.556785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.569233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.569266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.569284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.585423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.585455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.585473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.597338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.597370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.597388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.613375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.613407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.613425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.628848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.628887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.628907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.642039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.642069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.642085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.654288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.654320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.654338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 [2024-07-14 14:09:21.669242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c9d360) 00:33:43.761 [2024-07-14 14:09:21.669287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.761 [2024-07-14 14:09:21.669305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.761 00:33:43.761 Latency(us) 00:33:43.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.761 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:43.761 nvme0n1 : 2.00 18423.96 71.97 0.00 0.00 6939.21 3543.80 22039.51 00:33:43.761 =================================================================================================================== 00:33:43.761 Total : 18423.96 71.97 0.00 0.00 6939.21 3543.80 22039.51 00:33:43.761 0 00:33:43.761 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:43.761 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:43.761 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:43.761 | .driver_specific 00:33:43.761 | .nvme_error 00:33:43.761 | .status_code 00:33:43.761 | .command_transient_transport_error' 00:33:43.761 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 
1605064 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1605064 ']' 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1605064 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1605064 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1605064' 00:33:44.018 killing process with pid 1605064 00:33:44.018 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1605064 00:33:44.019 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.019 00:33:44.019 Latency(us) 00:33:44.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.019 =================================================================================================================== 00:33:44.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.019 14:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1605064 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:44.277 14:09:22 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1605558 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1605558 /var/tmp/bperf.sock 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1605558 ']' 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:44.277 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:44.277 [2024-07-14 14:09:22.244002] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:44.277 [2024-07-14 14:09:22.244081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605558 ] 00:33:44.277 I/O size of 131072 is greater than zero copy threshold (65536). 
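The `--ddgst`-enabled run above reports its digest failures through the `bdev_get_iostat` RPC; the `get_transient_errcount` helper in the trace pipes that RPC's output through a jq filter to extract `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` (144 in this run, per the `(( 144 > 0 ))` check). A minimal Python sketch of the same extraction, using a hand-written sample document rather than a live bperf socket; the JSON shape here is an assumption reconstructed from the jq path in the trace, not captured RPC output:

```python
import json

# Illustrative bdev_get_iostat-style payload. The nesting mirrors the jq
# filter used by get_transient_errcount in the trace above; the count 144
# matches this run's (( 144 > 0 )) check. The payload itself is made up.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 144
          }
        }
      }
    }
  ]
}
""")

# Python equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
#          | .command_transient_transport_error'
count = (sample["bdevs"][0]["driver_specific"]
               ["nvme_error"]["status_code"]
               ["command_transient_transport_error"])
print(count)
```

In the test itself this count is the pass/fail signal: with data-digest corruption injected via `accel_error_inject_error -o crc32c`, a nonzero transient-transport-error count confirms the digest errors were surfaced to the bdev layer.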
00:33:44.277 Zero copy mechanism will not be used. 00:33:44.535 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.535 [2024-07-14 14:09:22.303729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.535 [2024-07-14 14:09:22.391203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.535 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:44.535 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:44.535 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.535 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:44.792 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:44.792 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.793 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.050 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.050 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.050 14:09:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.308 nvme0n1 00:33:45.308 14:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:45.308 14:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.308 14:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.308 14:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.308 14:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:45.308 14:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.587 Zero copy mechanism will not be used. 00:33:45.587 Running I/O for 2 seconds... 00:33:45.587 [2024-07-14 14:09:23.334326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.334395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.334418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.341486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.341532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.341550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.348442] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.348474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.348508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.355407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.355438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.355471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.362208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.362239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.362272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.368968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.369001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.369019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.376577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.376768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.376790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.382836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.382867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.382911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.389706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.389737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.389778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.396670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.396701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.396734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.403674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.403704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.403737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.410609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.410640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.410673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.417596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.417627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 14:09:23.417659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.587 [2024-07-14 14:09:23.424478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:45.587 [2024-07-14 14:09:23.424523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.587 [2024-07-14 
14:09:23.424541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.587 [2024-07-14 14:09:23.431470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.587 [2024-07-14 14:09:23.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.587 [2024-07-14 14:09:23.431532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.587 [2024-07-14 14:09:23.438609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.587 [2024-07-14 14:09:23.438640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.587 [2024-07-14 14:09:23.438673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.587 [2024-07-14 14:09:23.445462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.587 [2024-07-14 14:09:23.445494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.587 [2024-07-14 14:09:23.445527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.587 [2024-07-14 14:09:23.453003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.587 [2024-07-14 14:09:23.453048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.453066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.460331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.460367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.460387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.467597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.467631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.467651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.475112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.475142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.475175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.482553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.482588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.482607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.490020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.490050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.490083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.498268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.498303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.498322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.504846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.504890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.504911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.512176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.512224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.512249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.519557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.519591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.519610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.526985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.527015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.527047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.534289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.534324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.534343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.541493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.541535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.541554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.549087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.549117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.549149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.556674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.556709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.556728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.588 [2024-07-14 14:09:23.564066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.588 [2024-07-14 14:09:23.564096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.588 [2024-07-14 14:09:23.564129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.572451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.572491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.572511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.580459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.580501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.580521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.588353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.588389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.588408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.596267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.596302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.596321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.604046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.604092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.604110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.612156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.612205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.612225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.620963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.620995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.621013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.628560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.628595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.628615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.636364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.636399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.636418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.644229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.644277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.644296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.651372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.651496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.651523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.659393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.659428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.659448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.667280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.667315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.667334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.675449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.675485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.675504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.846 [2024-07-14 14:09:23.683490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.846 [2024-07-14 14:09:23.683525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.846 [2024-07-14 14:09:23.683544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.691622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.691657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.691676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.699219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.699254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.699272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.707147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.707195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.707215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.715898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.715946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.715973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.723782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.723817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.723837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.731661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.731696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.731716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.739622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.739657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.739676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.748240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.748276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.748296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.756347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.756382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.756401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.764824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.764860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.773439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.773476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.773494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.781715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.781751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.781770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.789278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.789319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.789339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.796702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.796737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.796756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.804079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.804126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.804153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.811439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.811475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.811494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.819024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.819055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.819072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.847 [2024-07-14 14:09:23.827340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:45.847 [2024-07-14 14:09:23.827375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.847 [2024-07-14 14:09:23.827394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.835866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.835926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.835945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.844127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.844159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.844176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.852733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.852769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.852788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.861020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.861051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.861068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.869052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.869084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.877118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.877150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.877167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.885056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.885087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.885104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.893147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.893179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.893197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.902078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.902124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.902141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.910726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.910760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.910779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.919761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.919796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.919815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.927535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.927619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.927665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.932456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.932491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.932510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.940710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.940745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.940764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.948313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.948348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.948367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:46.107 [2024-07-14 14:09:23.956541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.107 [2024-07-14 14:09:23.956606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.107 [2024-07-14 14:09:23.956646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:23.964114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:23.964146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:23.964163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:23.971807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:23.971842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:23.971861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:23.980619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:23.980655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:23.980674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:23.988247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:23.988283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:23.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:23.995849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:23.995892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:23.995913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:24.003319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:24.003355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:24.003374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:24.010914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:24.010961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:24.010977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:24.018569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:24.018604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:24.018623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:46.108 [2024-07-14 14:09:24.026512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50)
00:33:46.108 [2024-07-14 14:09:24.026547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.108 [2024-07-14 14:09:24.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061
p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.034434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.034469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.034488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.042236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.042271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.042291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.049900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.049947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.049964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.057808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.057843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.057869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.065874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.065930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.065947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.073795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.073830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.073849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.108 [2024-07-14 14:09:24.081631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.108 [2024-07-14 14:09:24.081666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.108 [2024-07-14 14:09:24.081686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.089278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.089315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.089334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.097190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.097235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.097255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.105059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.105091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.105109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.112992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.113022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.113053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.120954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.120995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:46.369 [2024-07-14 14:09:24.121013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.128506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.128547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.128567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.136389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.136424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.136443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.369 [2024-07-14 14:09:24.144352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.369 [2024-07-14 14:09:24.144387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.369 [2024-07-14 14:09:24.144406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.152196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.152231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.152251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.160181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.160228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.160248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.167958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.168005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.168021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.175933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.175962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.175994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.183838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.183873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.183903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.191782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.191817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.191837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.199659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.199694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.199713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.207573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.207608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.207627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.215319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 
00:33:46.370 [2024-07-14 14:09:24.215354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.215373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.223146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.223176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.223192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.231078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.231108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.231140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.239004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.239050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.239067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.246941] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.246973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.246990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.254511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.254546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.254566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.262441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.262476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.262501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.270297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.270331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.270351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.278185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.278214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.278246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.286047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.286077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.286110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.293895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.293939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.293955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.301844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.301888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.301925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.309500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.309534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.309554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.317373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.317408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.317427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.325192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.325227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.325246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.333063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.333099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.333132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.340825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.340860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.340887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.370 [2024-07-14 14:09:24.348748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.370 [2024-07-14 14:09:24.348783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.370 [2024-07-14 14:09:24.348802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.356581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.356616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.356636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.364206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.364254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:46.630 [2024-07-14 14:09:24.364274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.372041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.372073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.372107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.379945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.379977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.379994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.387798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.387834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.387853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.395617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.395662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.395681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.403442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.403478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.403497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.411414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.411451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.411470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.630 [2024-07-14 14:09:24.419386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.630 [2024-07-14 14:09:24.419422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.630 [2024-07-14 14:09:24.419441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.427219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.427255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.427275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.435075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.435107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.435124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.442968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.443013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.443030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.450670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.450701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.450735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.458526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 
00:33:46.631 [2024-07-14 14:09:24.458563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.458582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.466409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.466445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.466470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.474225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.474262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.474281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.481981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.482013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.482031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.489945] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.489991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.490008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.497915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.497963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.497980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.505549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.505585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.505604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.513551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.513587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.513606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.521453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.521489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.521508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.529298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.529334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.529352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.537116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.537158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.537194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.544990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.545022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.545040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.553079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.553110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.553128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.560940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.560971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.560987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.568637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.568672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.568691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.576474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.576509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.576529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.584327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.584362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.584381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.592243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.592278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.592297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.599953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.599985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.600007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.631 [2024-07-14 14:09:24.607811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.631 [2024-07-14 14:09:24.607847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:46.631 [2024-07-14 14:09:24.607866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.615804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.615840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.615860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.623646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.623681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.623700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.631513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.631549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.631568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.639351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.639387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.639405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.647104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.647150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.647167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.655046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.655078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.655096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.662993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.663024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.663041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.671093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.671145] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.671162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.679031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.679063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.679080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.686966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.687013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.687030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.694723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.694758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.694777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.702698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.702733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.702752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.710520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.710555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.710574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.718309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.718344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.718363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.726031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.726076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.726093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.733965] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.733997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.734013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.741843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.741886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.741908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.749836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.749871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.749899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.757793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.891 [2024-07-14 14:09:24.757828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.891 [2024-07-14 14:09:24.757847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:46.891 [2024-07-14 14:09:24.765670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.765705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.765724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.773478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.773513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.773531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.781238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.781289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.781308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.789106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.789139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.789172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.797328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.797363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.797382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.806206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.806241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.806266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.813746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.813781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.813800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.821770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.821805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.821824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.829801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.829836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.829855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.838289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.838324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.838343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.846649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.846684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.846702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.855030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.855077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.855094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.862799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.862834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.862853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.892 [2024-07-14 14:09:24.870290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:46.892 [2024-07-14 14:09:24.870325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.892 [2024-07-14 14:09:24.870344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.151 [2024-07-14 14:09:24.877818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.151 [2024-07-14 14:09:24.877859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.151 [2024-07-14 14:09:24.877889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.151 [2024-07-14 14:09:24.885321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.151 [2024-07-14 14:09:24.885356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.151 [2024-07-14 14:09:24.885375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.151 [2024-07-14 14:09:24.893236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.151 [2024-07-14 14:09:24.893285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.893305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.901194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.901241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.901261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.909258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.909292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.909311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.917535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.917571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.917590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.925971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.926001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.926032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.934532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.934568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.934587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.942963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.942994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.943010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.951525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 
00:33:47.152 [2024-07-14 14:09:24.951560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.951579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.959226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.959261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.959280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.966699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.966734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.966752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.974483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.974518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.974537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.982426] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.982461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.982480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.990529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.990563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.990582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:24.998886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:24.998933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:24.998950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.006948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.006979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.006996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.014613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.014647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.014673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.022330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.022365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.022384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.031021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.031052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.031085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.039604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.039639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.039657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.047807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.047843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.047862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.056455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.056490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.056509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.064752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.152 [2024-07-14 14:09:25.064787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.152 [2024-07-14 14:09:25.064807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.152 [2024-07-14 14:09:25.073031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.073063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.073100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.080466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.080501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.080520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.088028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.088064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.088098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.095662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.095696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.103178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.103214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:47.153 [2024-07-14 14:09:25.103233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.111094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.111126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.111143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.119325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.119361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.119380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.153 [2024-07-14 14:09:25.127139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.153 [2024-07-14 14:09:25.127188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.153 [2024-07-14 14:09:25.127207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.411 [2024-07-14 14:09:25.134873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.411 [2024-07-14 14:09:25.134930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.411 [2024-07-14 14:09:25.134948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.143133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.143179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.143196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.151090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.151136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.151153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.158940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.158971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.158987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.166860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.166903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.166936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.175017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.175049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.175067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.182958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.182990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.183022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.190442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.190477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.190496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.198361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 
00:33:47.412 [2024-07-14 14:09:25.198396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.198415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.206231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.206266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.206285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.214053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.214084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.214101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.221944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.221980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.221999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.229807] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.229841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.229860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.237553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.237588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.237607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.245414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.245449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.245468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.253416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.253450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.253470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.261244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.261278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.261297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.269107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.269138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.269155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.276834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.276869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.276899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.284695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.284730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.284749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.292676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.292711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.292730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.300521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.300557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.300576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.308333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.308368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.412 [2024-07-14 14:09:25.308386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:47.412 [2024-07-14 14:09:25.316196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.412 [2024-07-14 14:09:25.316242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.413 [2024-07-14 14:09:25.316262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:47.413 [2024-07-14 14:09:25.324100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.413 [2024-07-14 14:09:25.324131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.413 [2024-07-14 14:09:25.324148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:47.413 [2024-07-14 14:09:25.331592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2395d50) 00:33:47.413 [2024-07-14 14:09:25.331627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.413 [2024-07-14 14:09:25.331646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.413 00:33:47.413 Latency(us) 00:33:47.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.413 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:47.413 nvme0n1 : 2.00 3969.47 496.18 0.00 0.00 4025.39 1140.81 9369.22 00:33:47.413 =================================================================================================================== 00:33:47.413 Total : 3969.47 496.18 0.00 0.00 4025.39 1140.81 9369.22 00:33:47.413 0 00:33:47.413 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:47.413 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:47.413 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:47.413 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:47.413 | .driver_specific 00:33:47.413 | .nvme_error 00:33:47.413 | .status_code 00:33:47.413 | .command_transient_transport_error' 00:33:47.671 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 256 > 0 )) 00:33:47.671 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1605558 00:33:47.671 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1605558 ']' 00:33:47.671 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1605558 00:33:47.671 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:47.671 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:47.672 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1605558 00:33:47.672 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:47.672 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:47.672 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1605558' 00:33:47.672 killing process with pid 1605558 00:33:47.672 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1605558 00:33:47.672 Received shutdown signal, test time was about 2.000000 seconds 00:33:47.672 00:33:47.672 Latency(us) 00:33:47.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.672 
=================================================================================================================== 00:33:47.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.672 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1605558 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1606001 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1606001 /var/tmp/bperf.sock 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1606001 ']' 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:47.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:47.931 14:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:47.931 [2024-07-14 14:09:25.895504] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:47.931 [2024-07-14 14:09:25.895579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606001 ] 00:33:48.190 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.190 [2024-07-14 14:09:25.958183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.190 [2024-07-14 14:09:26.045692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.190 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:48.190 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:48.190 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:48.190 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:48.448 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:48.448 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.448 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.448 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.448 14:09:26 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.448 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.017 nvme0n1 00:33:49.017 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:49.017 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.017 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.017 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.017 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:49.017 14:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:49.017 Running I/O for 2 seconds... 
00:33:49.017 [2024-07-14 14:09:26.835517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ee5c8
00:33:49.017 [2024-07-14 14:09:26.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.017 [2024-07-14 14:09:26.836551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:49.017 [2024-07-14 14:09:26.848323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fbcf0
00:33:49.017 [2024-07-14 14:09:26.849422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.017 [2024-07-14 14:09:26.849469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:49.017 [2024-07-14 14:09:26.860221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fb8b8
00:33:49.017 [2024-07-14 14:09:26.861440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.017 [2024-07-14 14:09:26.861487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:49.017 [2024-07-14 14:09:26.872747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e4140
00:33:49.017 [2024-07-14 14:09:26.874114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.017 [2024-07-14 14:09:26.874160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:49.017 [2024-07-14 14:09:26.885276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e12d8
00:33:49.017 [2024-07-14 14:09:26.886747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.017 [2024-07-14 14:09:26.886791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:49.017 [2024-07-14 14:09:26.897630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6020
00:33:49.017 [2024-07-14 14:09:26.899352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.899395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.909712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fd208
00:33:49.018 [2024-07-14 14:09:26.911360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.911404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.919628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190df988
00:33:49.018 [2024-07-14 14:09:26.920408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.920438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.930865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc998
00:33:49.018 [2024-07-14 14:09:26.932032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.942674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f35f0
00:33:49.018 [2024-07-14 14:09:26.943579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.943628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.954365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f31b8
00:33:49.018 [2024-07-14 14:09:26.955399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.955442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.967510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e0ea0
00:33:49.018 [2024-07-14 14:09:26.968709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.968753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.979728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0ff8
00:33:49.018 [2024-07-14 14:09:26.981096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.981152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:49.018 [2024-07-14 14:09:26.990936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ec408
00:33:49.018 [2024-07-14 14:09:26.992272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.018 [2024-07-14 14:09:26.992303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.003371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190eaef0
00:33:49.278 [2024-07-14 14:09:27.004918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.004971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.015639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7970
00:33:49.278 [2024-07-14 14:09:27.017302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.017347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.028029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f2d80
00:33:49.278 [2024-07-14 14:09:27.029822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.029869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.036338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fb048
00:33:49.278 [2024-07-14 14:09:27.037090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.037141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.047966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ddc00
00:33:49.278 [2024-07-14 14:09:27.048839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.048896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.061120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f20d8
00:33:49.278 [2024-07-14 14:09:27.062199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.062235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.073348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f9f68
00:33:49.278 [2024-07-14 14:09:27.074541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.074591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.084952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e12d8
00:33:49.278 [2024-07-14 14:09:27.086315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.086360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.097381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f81e0
00:33:49.278 [2024-07-14 14:09:27.098887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.098936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.109444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fb480
00:33:49.278 [2024-07-14 14:09:27.110903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.110948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.119280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6cc8
00:33:49.278 [2024-07-14 14:09:27.119913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.131576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ddc00
00:33:49.278 [2024-07-14 14:09:27.132365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.132401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.143665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6458
00:33:49.278 [2024-07-14 14:09:27.144686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.144735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.155821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ed0b0
00:33:49.278 [2024-07-14 14:09:27.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.156809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.166630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e6300
00:33:49.278 [2024-07-14 14:09:27.167919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.167953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.178229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ecc78
00:33:49.278 [2024-07-14 14:09:27.179144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.179197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.189466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6458
00:33:49.278 [2024-07-14 14:09:27.190394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.190446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.202742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e4de8
00:33:49.278 [2024-07-14 14:09:27.203783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.203836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.214768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190edd58
00:33:49.278 [2024-07-14 14:09:27.215975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.227131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190feb58
00:33:49.278 [2024-07-14 14:09:27.228520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.228573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.236019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f9b30
00:33:49.278 [2024-07-14 14:09:27.236746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.236797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:49.278 [2024-07-14 14:09:27.251417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f3e60
00:33:49.278 [2024-07-14 14:09:27.252911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.278 [2024-07-14 14:09:27.252950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.263325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0788
00:33:49.539 [2024-07-14 14:09:27.265013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.265064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.275424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e23b8
00:33:49.539 [2024-07-14 14:09:27.277039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.277090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.285321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e6fa8
00:33:49.539 [2024-07-14 14:09:27.286141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.286193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.297419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f1430
00:33:49.539 [2024-07-14 14:09:27.298348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.298386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.311004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e7818
00:33:49.539 [2024-07-14 14:09:27.312793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.312842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.320394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f5378
00:33:49.539 [2024-07-14 14:09:27.321534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.321584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.333286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc998
00:33:49.539 [2024-07-14 14:09:27.334377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.334408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.346971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e7818
00:33:49.539 [2024-07-14 14:09:27.348995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.349035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.355427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f8e88
00:33:49.539 [2024-07-14 14:09:27.356312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.356363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.367254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ed920
00:33:49.539 [2024-07-14 14:09:27.368252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.368284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.379550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f4298
00:33:49.539 [2024-07-14 14:09:27.380671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.380721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:49.539 [2024-07-14 14:09:27.391804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e0630
00:33:49.539 [2024-07-14 14:09:27.393109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.539 [2024-07-14 14:09:27.393161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.403471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0ff8
00:33:49.540 [2024-07-14 14:09:27.405292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.405322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.414338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e99d8
00:33:49.540 [2024-07-14 14:09:27.415174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.415225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.427508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fcdd0
00:33:49.540 [2024-07-14 14:09:27.428970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.429010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.436950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc560
00:33:49.540 [2024-07-14 14:09:27.437757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.437807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.449674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f8618
00:33:49.540 [2024-07-14 14:09:27.450362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.450399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.462528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190eee38
00:33:49.540 [2024-07-14 14:09:27.463914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.463959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.474855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fdeb0
00:33:49.540 [2024-07-14 14:09:27.476427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.476479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.484271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ecc78
00:33:49.540 [2024-07-14 14:09:27.485179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.485231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.497134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7538
00:33:49.540 [2024-07-14 14:09:27.497926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.497962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.508595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0350
00:33:49.540 [2024-07-14 14:09:27.509844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.540 [2024-07-14 14:09:27.509873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:49.540 [2024-07-14 14:09:27.520287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e6300
00:33:49.799 [2024-07-14 14:09:27.521318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.521352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.535152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f4298
00:33:49.799 [2024-07-14 14:09:27.536997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.537049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.543511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e3d08
00:33:49.799 [2024-07-14 14:09:27.544288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.544323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.554761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fda78
00:33:49.799 [2024-07-14 14:09:27.555568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.555619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.567125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f5378
00:33:49.799 [2024-07-14 14:09:27.568041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.568092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.579149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fb8b8
00:33:49.799 [2024-07-14 14:09:27.580054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.580106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.592909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fcdd0
00:33:49.799 [2024-07-14 14:09:27.594455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.594513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.602496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ec840
00:33:49.799 [2024-07-14 14:09:27.603444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.603494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.617798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f5be8
00:33:49.799 [2024-07-14 14:09:27.619677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.619729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.627239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5220
00:33:49.799 [2024-07-14 14:09:27.628450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.628502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.639043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e3498
00:33:49.799 [2024-07-14 14:09:27.640804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.799 [2024-07-14 14:09:27.640833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:49.799 [2024-07-14 14:09:27.650022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fef90
00:33:49.799 [2024-07-14 14:09:27.650747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.650798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.662064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7da8
00:33:49.800 [2024-07-14 14:09:27.662970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.663006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.675399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f96f8
00:33:49.800 [2024-07-14 14:09:27.676863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.676922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.687729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f2d80
00:33:49.800 [2024-07-14 14:09:27.689420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.689465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.697287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e0ea0
00:33:49.800 [2024-07-14 14:09:27.698367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.698422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.709604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ecc78
00:33:49.800 [2024-07-14 14:09:27.710773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.710816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.722760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5ec8
00:33:49.800 [2024-07-14 14:09:27.724228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.724262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.734665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ea680
00:33:49.800 [2024-07-14 14:09:27.735626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.735664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.747549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fa7d8
00:33:49.800 [2024-07-14 14:09:27.748382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.748424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.761402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5ec8
00:33:49.800 [2024-07-14 14:09:27.763053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.763102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:49.800 [2024-07-14 14:09:27.771626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f2510
00:33:49.800 [2024-07-14 14:09:27.772558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:49.800 [2024-07-14 14:09:27.772594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:50.060 [2024-07-14 14:09:27.785553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f4f40
00:33:50.060 [2024-07-14 14:09:27.786331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:50.060 [2024-07-14 14:09:27.786374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:50.060 [2024-07-14 14:09:27.798894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5ec8
00:33:50.060 [2024-07-14 14:09:27.799822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:50.060 [2024-07-14 14:09:27.799865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:50.060 [2024-07-14 14:09:27.811608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ea248 00:33:50.060 [2024-07-14 14:09:27.812894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.812937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.826121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f9b30 00:33:50.060 [2024-07-14 14:09:27.828113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.828152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.836423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ee190 00:33:50.060 [2024-07-14 14:09:27.837667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.837711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.850369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7da8 00:33:50.060 [2024-07-14 14:09:27.851442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.851491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.864202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fd640 00:33:50.060 [2024-07-14 14:09:27.866136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.866199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.874499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ebfd0 00:33:50.060 [2024-07-14 14:09:27.875699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.875740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.887947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ebb98 00:33:50.060 [2024-07-14 14:09:27.889345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.889380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.901325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e0ea0 00:33:50.060 [2024-07-14 14:09:27.902885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.902934] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.914347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fd208 00:33:50.060 [2024-07-14 14:09:27.915396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.915430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.926342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190eaab8 00:33:50.060 [2024-07-14 14:09:27.928354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.928388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.937161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7970 00:33:50.060 [2024-07-14 14:09:27.938051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.938093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.950639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6cc8 00:33:50.060 [2024-07-14 14:09:27.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.951695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.963963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fbcf0 00:33:50.060 [2024-07-14 14:09:27.965185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.965230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.978174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fcdd0 00:33:50.060 [2024-07-14 14:09:27.979565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.979599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:27.991327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e12d8 00:33:50.060 [2024-07-14 14:09:27.992851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.060 [2024-07-14 14:09:27.992896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:28.001708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f2d80 00:33:50.060 [2024-07-14 14:09:28.002557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:50.060 [2024-07-14 14:09:28.002600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:50.060 [2024-07-14 14:09:28.014710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190df988 00:33:50.060 [2024-07-14 14:09:28.015426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.061 [2024-07-14 14:09:28.015460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:50.061 [2024-07-14 14:09:28.027964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ef270 00:33:50.061 [2024-07-14 14:09:28.028834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.061 [2024-07-14 14:09:28.028872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:50.061 [2024-07-14 14:09:28.041281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f31b8 00:33:50.321 [2024-07-14 14:09:28.042317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.042351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.055874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fd208 00:33:50.321 [2024-07-14 14:09:28.057952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:20995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.057995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.064940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190efae0 00:33:50.321 [2024-07-14 14:09:28.065779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.065812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.079425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e88f8 00:33:50.321 [2024-07-14 14:09:28.081149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.081179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.089814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e4de8 00:33:50.321 [2024-07-14 14:09:28.090664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.090702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.103087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6cc8 00:33:50.321 [2024-07-14 14:09:28.104103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.104147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.117306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5a90 00:33:50.321 [2024-07-14 14:09:28.118500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.118536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.130485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0ff8 00:33:50.321 [2024-07-14 14:09:28.131862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.131903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.142543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f46d0 00:33:50.321 [2024-07-14 14:09:28.143905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.143939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.155961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0bc0 
00:33:50.321 [2024-07-14 14:09:28.157502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.157536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.166161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190de8a8 00:33:50.321 [2024-07-14 14:09:28.167012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.167043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.179356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190edd58 00:33:50.321 [2024-07-14 14:09:28.180321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.180354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.192669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fcdd0 00:33:50.321 [2024-07-14 14:09:28.193832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.193865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.205971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2348bc0) with pdu=0x2000190f0ff8 00:33:50.321 [2024-07-14 14:09:28.207304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.207341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.218447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ed0b0 00:33:50.321 [2024-07-14 14:09:28.220409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.220444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.229363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e27f0 00:33:50.321 [2024-07-14 14:09:28.230165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.245032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ed920 00:33:50.321 [2024-07-14 14:09:28.246533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.246567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.255483] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fe2e8 00:33:50.321 [2024-07-14 14:09:28.256275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.268737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ebb98 00:33:50.321 [2024-07-14 14:09:28.269731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.269767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.281975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e6fa8 00:33:50.321 [2024-07-14 14:09:28.283131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.283175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:50.321 [2024-07-14 14:09:28.295236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0bc0 00:33:50.321 [2024-07-14 14:09:28.296585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.321 [2024-07-14 14:09:28.296618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:33:50.581 [2024-07-14 14:09:28.308574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ee190 00:33:50.581 [2024-07-14 14:09:28.310090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.581 [2024-07-14 14:09:28.310136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:50.581 [2024-07-14 14:09:28.321889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ebb98 00:33:50.581 [2024-07-14 14:09:28.323575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.581 [2024-07-14 14:09:28.323610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:50.581 [2024-07-14 14:09:28.335138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f96f8 00:33:50.581 [2024-07-14 14:09:28.337077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.581 [2024-07-14 14:09:28.337107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.348146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e95a0 00:33:50.582 [2024-07-14 14:09:28.350038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.350081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.360540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e4578 00:33:50.582 [2024-07-14 14:09:28.362365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.362408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.373725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f2510 00:33:50.582 [2024-07-14 14:09:28.375763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.375801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.382750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f4298 00:33:50.582 [2024-07-14 14:09:28.383578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.383612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.396091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e1b48 00:33:50.582 [2024-07-14 14:09:28.397113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.397156] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.408085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0bc0 00:33:50.582 [2024-07-14 14:09:28.409107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.409151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.421490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ebb98 00:33:50.582 [2024-07-14 14:09:28.422620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.422653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.434719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190eea00 00:33:50.582 [2024-07-14 14:09:28.436078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.436122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.447701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fe2e8 00:33:50.582 [2024-07-14 14:09:28.448532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.448566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.461074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e01f8 00:33:50.582 [2024-07-14 14:09:28.462119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.462165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.472732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0bc0 00:33:50.582 [2024-07-14 14:09:28.474114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.474144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.485368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e49b0 00:33:50.582 [2024-07-14 14:09:28.486355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.486388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.497372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fbcf0 00:33:50.582 [2024-07-14 14:09:28.498370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:50.582 [2024-07-14 14:09:28.498403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.510245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e0ea0 00:33:50.582 [2024-07-14 14:09:28.511219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.511252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.525139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc560 00:33:50.582 [2024-07-14 14:09:28.526679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.526712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.537173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f0350 00:33:50.582 [2024-07-14 14:09:28.538680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.538713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.549995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e7c50 00:33:50.582 [2024-07-14 14:09:28.551488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 
nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.582 [2024-07-14 14:09:28.551523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:50.582 [2024-07-14 14:09:28.563201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f35f0 00:33:50.842 [2024-07-14 14:09:28.564711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.564745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.573778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190df988 00:33:50.842 [2024-07-14 14:09:28.574600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.574633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.586997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fda78 00:33:50.842 [2024-07-14 14:09:28.588156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.588198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.600342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e6fa8 00:33:50.842 [2024-07-14 14:09:28.601664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.601696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.613490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc128 00:33:50.842 [2024-07-14 14:09:28.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.615041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.626777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5220 00:33:50.842 [2024-07-14 14:09:28.628467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.628501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.640099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e0ea0 00:33:50.842 [2024-07-14 14:09:28.642010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.642054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.649314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e73e0 
00:33:50.842 [2024-07-14 14:09:28.650155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.650199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.664777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc128 00:33:50.842 [2024-07-14 14:09:28.666138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.666181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.678166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ee190 00:33:50.842 [2024-07-14 14:09:28.679702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.679737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.688829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e84c0 00:33:50.842 [2024-07-14 14:09:28.689520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.689566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.703380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2348bc0) with pdu=0x2000190fda78 00:33:50.842 [2024-07-14 14:09:28.705094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.705145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.716635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f6890 00:33:50.842 [2024-07-14 14:09:28.718506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.718541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.729835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190ef270 00:33:50.842 [2024-07-14 14:09:28.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.731973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.739062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e5a90 00:33:50.842 [2024-07-14 14:09:28.740118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.740168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.752359] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fef90 00:33:50.842 [2024-07-14 14:09:28.753549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.753584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.766196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190de038 00:33:50.842 [2024-07-14 14:09:28.767244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.767285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.780047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190e95a0 00:33:50.842 [2024-07-14 14:09:28.781945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.781979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.790227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190fc128 00:33:50.842 [2024-07-14 14:09:28.791424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.791465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 
dnr:0 00:33:50.842 [2024-07-14 14:09:28.804046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f5378 00:33:50.842 [2024-07-14 14:09:28.805086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.842 [2024-07-14 14:09:28.805123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:50.842 [2024-07-14 14:09:28.816054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7970 00:33:50.843 [2024-07-14 14:09:28.817969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:50.843 [2024-07-14 14:09:28.817999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:51.101 [2024-07-14 14:09:28.829989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348bc0) with pdu=0x2000190f7100 00:33:51.101 [2024-07-14 14:09:28.831514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:51.101 [2024-07-14 14:09:28.831547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:51.101 00:33:51.101 Latency(us) 00:33:51.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.101 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:51.101 nvme0n1 : 2.01 20754.59 81.07 0.00 0.00 6156.99 2706.39 17282.09 00:33:51.101 =================================================================================================================== 00:33:51.101 Total : 20754.59 81.07 0.00 
0.00 6156.99 2706.39 17282.09 00:33:51.101 0 00:33:51.101 14:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:51.101 14:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:51.101 14:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:51.101 14:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:51.101 | .driver_specific 00:33:51.101 | .nvme_error 00:33:51.101 | .status_code 00:33:51.101 | .command_transient_transport_error' 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1606001 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1606001 ']' 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1606001 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1606001 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1606001' 00:33:51.358 killing process with pid 1606001 00:33:51.358 14:09:29 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1606001 00:33:51.358 Received shutdown signal, test time was about 2.000000 seconds 00:33:51.358 00:33:51.358 Latency(us) 00:33:51.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.358 =================================================================================================================== 00:33:51.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.358 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1606001 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1606404 00:33:51.615 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1606404 /var/tmp/bperf.sock 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1606404 ']' 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:51.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:51.616 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:51.616 [2024-07-14 14:09:29.388646] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:51.616 [2024-07-14 14:09:29.388720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606404 ] 00:33:51.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:51.616 Zero copy mechanism will not be used. 00:33:51.616 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.616 [2024-07-14 14:09:29.446712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.616 [2024-07-14 14:09:29.532274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.872 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:51.872 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:51.872 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:51.872 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:52.130 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t 
disable 00:33:52.130 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.130 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.130 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.130 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.130 14:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.388 nvme0n1 00:33:52.388 14:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:52.388 14:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.388 14:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 14:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.388 14:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:52.388 14:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:52.647 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:52.647 Zero copy mechanism will not be used. 00:33:52.647 Running I/O for 2 seconds... 
00:33:52.647 [2024-07-14 14:09:30.478617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.479024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.479062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.484537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.484856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.484897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.490257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.490574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.490607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.495896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.496223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.496255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.501454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.501769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.501799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.507010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.507326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.507358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.512591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.512929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.647 [2024-07-14 14:09:30.512958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.647 [2024-07-14 14:09:30.518247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.647 [2024-07-14 14:09:30.518564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.518596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.523865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.524177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.524222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.529543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.529857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.529899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.535092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.535493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.535525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.540732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.541086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:52.648 [2024-07-14 14:09:30.541115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.546357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.546675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.546706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.551968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.552279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.552311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.557537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.557853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.557896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.563109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.563452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.563484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.568576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.568899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.568953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.574165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.574481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.574512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.579922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.580000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.580027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.586102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.586429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.586461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.591851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.592181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.592214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.598066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.598417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.598450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.603783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.604107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.610282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 
00:33:52.648 [2024-07-14 14:09:30.610632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.610663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.616727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.617067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.617112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.622633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.622998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.648 [2024-07-14 14:09:30.623026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.648 [2024-07-14 14:09:30.628976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.648 [2024-07-14 14:09:30.629328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.629360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.635621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.635957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.635986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.641994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.642308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.642339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.648353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.648669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.648701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.654851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.655182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 
14:09:30.661303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.661665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.661697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.667597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.667923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.667957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.673837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.674156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.674201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.679816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.680134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.680179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.685326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.685641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.685672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.690796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.691111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.691140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.696275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.696625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.696656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.701790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.702107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.702135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.707842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.708164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.708208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.713929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.714238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.714269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.719473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.719796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.908 [2024-07-14 14:09:30.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.908 [2024-07-14 14:09:30.725001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.908 [2024-07-14 14:09:30.725352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.725384] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.730945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.731307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.737289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.737611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.737645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.742830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.743152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.743188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.748337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.748652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.748683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.754447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.754780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.754812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.760344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.760660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.760691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.765842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.766158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.766202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.771367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.771684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.771715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.776906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.777239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.777270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.782566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.782902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.782945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.788162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.788495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.788527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.793692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.794019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.794047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.799300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.799657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.799687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.804960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.805304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.805336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.810598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.810964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.810993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.816154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 
00:33:52.909 [2024-07-14 14:09:30.816478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.816516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.821705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.822033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.822061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.827205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.827532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.827563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.832718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.833044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.833072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.838362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.838679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.838710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.843932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.844240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.844271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.849541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.849900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.849945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.855144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.855487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.855518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 
14:09:30.860719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.861042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.861071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.866368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.866690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.866721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.871951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.872263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.872294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.877533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.877847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.877885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.883056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.909 [2024-07-14 14:09:30.883375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.909 [2024-07-14 14:09:30.883407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:52.909 [2024-07-14 14:09:30.888561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:52.910 [2024-07-14 14:09:30.888884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.910 [2024-07-14 14:09:30.888930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.894108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.894453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.894484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.899735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.900089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.900117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.905486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.905835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.905866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.911073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.911394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.911425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.917592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.917933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.917962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.923156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.923486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.923517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.928658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.929011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.929039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.934114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.169 [2024-07-14 14:09:30.934439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.169 [2024-07-14 14:09:30.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.169 [2024-07-14 14:09:30.940591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.940932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.940960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.946427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.946725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.946756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.951792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.952109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.952137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.957407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.957723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.957754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.963309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.963626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.963667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.969273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.969591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.969622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.976750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.976905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.976950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.983787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.984117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.984146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.991114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.991443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.991476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:30.997659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:30.997983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:30.998012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.004193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.004510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.004541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.010689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.011019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.011048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.017215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.017575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.017606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.023635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 
00:33:53.170 [2024-07-14 14:09:31.023955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.023984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.030208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.030542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.030574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.037398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.037738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.037769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.044592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.044920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.044966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.051211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.051522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.051554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.057567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.057838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.057866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.064099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.064433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.064464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.071125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.071461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.071492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 
14:09:31.078304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.078632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.078672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.085042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.085356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.085389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.092062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.092397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.092429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.098509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.098821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.098853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.105992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.106314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.106347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.113729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.114059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.114088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.119805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.120105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.120133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.125648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.170 [2024-07-14 14:09:31.125978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.170 [2024-07-14 14:09:31.126007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.170 [2024-07-14 14:09:31.131212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.171 [2024-07-14 14:09:31.131513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.171 [2024-07-14 14:09:31.131555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.171 [2024-07-14 14:09:31.136517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.171 [2024-07-14 14:09:31.136810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.171 [2024-07-14 14:09:31.136837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.171 [2024-07-14 14:09:31.141594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.171 [2024-07-14 14:09:31.141897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.171 [2024-07-14 14:09:31.141925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.171 [2024-07-14 14:09:31.146724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.171 [2024-07-14 14:09:31.147022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.171 [2024-07-14 14:09:31.147050] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.151741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.152045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.152074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.156727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.157059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.157087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.161769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.162068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.162097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.166863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.167167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.167210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.171914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.172228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.172256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.176921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.177216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.177243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.182064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.182368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.182395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.187919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.188240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.194378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.194580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.194608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.201222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.201358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.201385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.209216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.209554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.430 [2024-07-14 14:09:31.209582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.430 [2024-07-14 14:09:31.216415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.430 [2024-07-14 14:09:31.216723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.216751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.223343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.223652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.223681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.230435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.230727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.230756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.237552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.237859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.237901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.244148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 
00:33:53.431 [2024-07-14 14:09:31.244454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.244492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.250688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.251020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.251050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.257516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.257809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.257836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.264029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.264338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.264366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.270788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.271137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.271177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.277872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.278174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.278203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.284723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.285052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.285080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.291629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.291972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.292000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 
14:09:31.298285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.298590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.298619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.304973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.305280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.305308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.311838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.312175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.312223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.318136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.318440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.318467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.323478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.323669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.323697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.329000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.329311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.329339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.334846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.335175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.335217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.340498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.340801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.340829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.345754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.346102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.346131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.351616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.351960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.351988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.358255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.358558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.358586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.364983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.365321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.365348] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.371804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.372117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.372145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.378543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.378836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.378864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.384260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.384568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.384596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.390796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.391119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.391148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.397594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.397903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.397931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.404228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.404580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.404630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.431 [2024-07-14 14:09:31.411104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.431 [2024-07-14 14:09:31.411401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.431 [2024-07-14 14:09:31.411429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.692 [2024-07-14 14:09:31.417663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.692 [2024-07-14 14:09:31.417968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.692 [2024-07-14 14:09:31.417996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:53.692 [2024-07-14 14:09:31.424230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90
00:33:53.692 [2024-07-14 14:09:31.424535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.692 [2024-07-14 14:09:31.424562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2058:data_crc32_calc_done *ERROR*: Data digest error → WRITE command → COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further len:32 WRITEs on qid:1 cid:15 between 14:09:31.424 and 14:09:31.849, all on tqpair=(0x2348e90) with pdu=0x2000190fef90, with varying lba and sqhd cycling 0001/0021/0041/0061 ...]
00:33:53.955 [2024-07-14 14:09:31.848738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90
00:33:53.955 [2024-07-14 14:09:31.849086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.955 [2024-07-14 14:09:31.849114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:53.955 [2024-07-14 14:09:31.853953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90
00:33:53.955 [2024-07-14 14:09:31.854250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.854278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.859351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.859627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.859654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.865041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.865355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.865383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.870063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.870346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.870374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.875085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.875350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.875378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.879940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.880213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.880241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.884716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.884990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.885019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.889417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.889682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.889710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 
14:09:31.894175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.894440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.955 [2024-07-14 14:09:31.894468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.955 [2024-07-14 14:09:31.898924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.955 [2024-07-14 14:09:31.899189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.899217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.903691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.903961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.903989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.908417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.908679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.908707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.913102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.913366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.913394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.917764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.918038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.918066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.922442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.922703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.922730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.927167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.927430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.927457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.956 [2024-07-14 14:09:31.931945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:53.956 [2024-07-14 14:09:31.932209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.956 [2024-07-14 14:09:31.932236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.936747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.937017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.937045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.941858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.942135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.942163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.946982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.947252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.947287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.952270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.952531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.952559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.958008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.958270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.958298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.963603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.963869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.963904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.969714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.969984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.970012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.975812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.976077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.976106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.982167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.982443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.982471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.988157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.988420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.988448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:31.994084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:31.994349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:31.994376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.000130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.000392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.000422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.006122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.006388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.006417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.012233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.012538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.017635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.017907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.017935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.023840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.024113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.024141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.030174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.030434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.030462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.036259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.036550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.036578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.042374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 
00:33:54.216 [2024-07-14 14:09:32.042642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.042684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.047439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.047700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.047739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.052452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.052732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.052759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.057373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.057665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.057693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.062295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.062556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.062584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.067027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.067293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.067320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.071693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.071977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.072005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 14:09:32.076364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.076639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.216 [2024-07-14 14:09:32.076666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.216 [2024-07-14 
14:09:32.082122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.216 [2024-07-14 14:09:32.082386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.082414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.087007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.087285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.087312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.091966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.092252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.092279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.096794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.097079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.097108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.101614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.101885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.101913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.106399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.106679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.106707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.111199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.111460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.111488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.116616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.116886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.116914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.122268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.122530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.122559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.127199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.127473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.127502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.132113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.132392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.132419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.136796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.137063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 
14:09:32.137090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.141631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.141906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.141934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.146866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.147166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.152965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.153254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.153282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.158781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.159058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.159086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.164401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.164662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.164690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.169869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.170153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.170182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.176006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.176263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.176291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.181439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.181716] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.181750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.186587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.186848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.186883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.192069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.217 [2024-07-14 14:09:32.192329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.217 [2024-07-14 14:09:32.192357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.217 [2024-07-14 14:09:32.197081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.197343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.197371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.202587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 
14:09:32.202888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.202916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.207552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.207824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.207851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.212333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.212610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.212637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.217107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.217380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.217408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.222357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.222623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.222650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.228048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.228423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.228466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.235098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.235363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.235406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.240946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.241210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.241238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.246505] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.246763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.246791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.251814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.252087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.252117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.256654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.256952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.256981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.261472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.261736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.261763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.266747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.267019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.267047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.272799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.273096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.273125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.277854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.278125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.477 [2024-07-14 14:09:32.278154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.477 [2024-07-14 14:09:32.282790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.477 [2024-07-14 14:09:32.283064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.283092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.287561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.287824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.287852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.292250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.292527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.292553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.297665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.297945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.297974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.303258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.303522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.303549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.308197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.308461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.308488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.313163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.313449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.313476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.318066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.318326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.318361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.322932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.323196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.323224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.327752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.328025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.328054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.332694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.332964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.332994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.337784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.338058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.338086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.342570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.342832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.342860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.347289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.347553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.347581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.351972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.352251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.352278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.356865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.357144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.357171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.361787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.362082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.362110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.367566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.367828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.367856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.372640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.372944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.372972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.377662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.377937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.377965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.382531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 
00:33:54.478 [2024-07-14 14:09:32.382796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.382824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.387772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.388043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.388072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.393073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.393335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.393362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.397859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.398129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.398157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.402816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.403085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.403120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.407653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.407924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.407952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.412375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.412638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.412666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.417194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.417466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.417494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 
14:09:32.421979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.422259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.478 [2024-07-14 14:09:32.422286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.478 [2024-07-14 14:09:32.426733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.478 [2024-07-14 14:09:32.427002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.427031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.479 [2024-07-14 14:09:32.431456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.479 [2024-07-14 14:09:32.431718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.431746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.479 [2024-07-14 14:09:32.436278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.479 [2024-07-14 14:09:32.436555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.436582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.479 [2024-07-14 14:09:32.441113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.479 [2024-07-14 14:09:32.441403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.441431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.479 [2024-07-14 14:09:32.446028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.479 [2024-07-14 14:09:32.446298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.446340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.479 [2024-07-14 14:09:32.450978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.479 [2024-07-14 14:09:32.451254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.451281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.479 [2024-07-14 14:09:32.455725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.479 [2024-07-14 14:09:32.455997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.479 [2024-07-14 14:09:32.456025] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.738 [2024-07-14 14:09:32.460592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.738 [2024-07-14 14:09:32.460892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.738 [2024-07-14 14:09:32.460920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.738 [2024-07-14 14:09:32.465436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2348e90) with pdu=0x2000190fef90 00:33:54.738 [2024-07-14 14:09:32.465699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.738 [2024-07-14 14:09:32.465728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.738 00:33:54.738 Latency(us) 00:33:54.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.738 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:54.738 nvme0n1 : 2.00 5403.69 675.46 0.00 0.00 2953.90 2257.35 12913.02 00:33:54.738 =================================================================================================================== 00:33:54.738 Total : 5403.69 675.46 0.00 0.00 2953.90 2257.35 12913.02 00:33:54.738 0 00:33:54.738 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:54.738 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:54.738 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:54.738 | 
.driver_specific 00:33:54.738 | .nvme_error 00:33:54.738 | .status_code 00:33:54.738 | .command_transient_transport_error' 00:33:54.738 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 348 > 0 )) 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1606404 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1606404 ']' 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1606404 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1606404 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1606404' 00:33:54.996 killing process with pid 1606404 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1606404 00:33:54.996 Received shutdown signal, test time was about 2.000000 seconds 00:33:54.996 00:33:54.996 Latency(us) 00:33:54.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.996 =================================================================================================================== 
00:33:54.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.996 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1606404 00:33:55.256 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1605045 00:33:55.256 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1605045 ']' 00:33:55.256 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1605045 00:33:55.256 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:55.256 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:55.256 14:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1605045 00:33:55.256 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:55.256 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:55.256 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1605045' 00:33:55.256 killing process with pid 1605045 00:33:55.256 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1605045 00:33:55.256 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1605045 00:33:55.515 00:33:55.515 real 0m15.091s 00:33:55.515 user 0m29.221s 00:33:55.515 sys 0m4.278s 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.515 ************************************ 00:33:55.515 END TEST nvmf_digest_error 00:33:55.515 ************************************ 00:33:55.515 14:09:33 
nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:55.515 rmmod nvme_tcp 00:33:55.515 rmmod nvme_fabrics 00:33:55.515 rmmod nvme_keyring 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1605045 ']' 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1605045 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1605045 ']' 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1605045 00:33:55.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1605045) - No such process 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1605045 is not found' 00:33:55.515 Process with pid 1605045 is not found 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:55.515 14:09:33 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:55.515 14:09:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.413 14:09:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:57.413 00:33:57.413 real 0m34.519s 00:33:57.413 user 0m59.970s 00:33:57.413 sys 0m10.068s 00:33:57.413 14:09:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:57.413 14:09:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.413 ************************************ 00:33:57.413 END TEST nvmf_digest 00:33:57.413 ************************************ 00:33:57.672 14:09:35 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:57.672 14:09:35 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:57.672 14:09:35 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:57.672 14:09:35 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:57.672 14:09:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:57.672 14:09:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:57.672 14:09:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.672 ************************************ 00:33:57.672 START TEST nvmf_bdevperf 00:33:57.672 ************************************ 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:57.672 * Looking for test 
storage... 00:33:57.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.672 
14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:57.672 14:09:35 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:57.672 14:09:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.673 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:57.673 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:57.673 14:09:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:57.673 14:09:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:59.576 14:09:37 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.576 
14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:59.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:59.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:59.576 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:59.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:59.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:59.577 14:09:37 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:59.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:33:59.577 00:33:59.577 --- 10.0.0.2 ping statistics --- 00:33:59.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.577 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:59.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:33:59.577 00:33:59.577 --- 10.0.0.1 ping statistics --- 00:33:59.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.577 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:59.577 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1608748 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1608748 00:33:59.850 14:09:37 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1608748 ']' 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:59.850 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.850 [2024-07-14 14:09:37.630106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:59.850 [2024-07-14 14:09:37.630182] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.850 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.850 [2024-07-14 14:09:37.694788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:59.850 [2024-07-14 14:09:37.782366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.850 [2024-07-14 14:09:37.782436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.850 [2024-07-14 14:09:37.782467] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.850 [2024-07-14 14:09:37.782478] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.850 [2024-07-14 14:09:37.782487] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:59.850 [2024-07-14 14:09:37.782617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.850 [2024-07-14 14:09:37.782681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.850 [2024-07-14 14:09:37.782684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.156 [2024-07-14 14:09:37.928142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.156 Malloc0 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.156 [2024-07-14 14:09:37.988154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:34:00.156 { 00:34:00.156 "params": { 00:34:00.156 "name": "Nvme$subsystem", 00:34:00.156 "trtype": "$TEST_TRANSPORT", 00:34:00.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:00.156 "adrfam": "ipv4", 00:34:00.156 "trsvcid": "$NVMF_PORT", 00:34:00.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:00.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:00.156 "hdgst": ${hdgst:-false}, 00:34:00.156 "ddgst": ${ddgst:-false} 00:34:00.156 }, 00:34:00.156 "method": "bdev_nvme_attach_controller" 00:34:00.156 } 00:34:00.156 EOF 00:34:00.156 )") 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:00.156 14:09:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:00.156 "params": { 00:34:00.156 "name": "Nvme1", 00:34:00.156 "trtype": "tcp", 00:34:00.156 "traddr": "10.0.0.2", 00:34:00.156 "adrfam": "ipv4", 00:34:00.156 "trsvcid": "4420", 00:34:00.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:00.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:00.156 "hdgst": false, 00:34:00.156 "ddgst": false 00:34:00.156 }, 00:34:00.156 "method": "bdev_nvme_attach_controller" 00:34:00.156 }' 00:34:00.156 [2024-07-14 14:09:38.038444] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:34:00.156 [2024-07-14 14:09:38.038513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608782 ] 00:34:00.156 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.156 [2024-07-14 14:09:38.102693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.421 [2024-07-14 14:09:38.191188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.421 Running I/O for 1 seconds... 00:34:01.796 00:34:01.796 Latency(us) 00:34:01.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.796 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:01.796 Verification LBA range: start 0x0 length 0x4000 00:34:01.796 Nvme1n1 : 1.01 8875.07 34.67 0.00 0.00 14358.40 1577.72 16117.00 00:34:01.796 =================================================================================================================== 00:34:01.796 Total : 8875.07 34.67 0.00 0.00 14358.40 1577.72 16117.00 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1609036 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:01.796 { 
00:34:01.796 "params": { 00:34:01.796 "name": "Nvme$subsystem", 00:34:01.796 "trtype": "$TEST_TRANSPORT", 00:34:01.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.796 "adrfam": "ipv4", 00:34:01.796 "trsvcid": "$NVMF_PORT", 00:34:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:01.796 "hdgst": ${hdgst:-false}, 00:34:01.796 "ddgst": ${ddgst:-false} 00:34:01.796 }, 00:34:01.796 "method": "bdev_nvme_attach_controller" 00:34:01.796 } 00:34:01.796 EOF 00:34:01.796 )") 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:01.796 14:09:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:01.796 "params": { 00:34:01.796 "name": "Nvme1", 00:34:01.796 "trtype": "tcp", 00:34:01.796 "traddr": "10.0.0.2", 00:34:01.796 "adrfam": "ipv4", 00:34:01.796 "trsvcid": "4420", 00:34:01.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:01.796 "hdgst": false, 00:34:01.796 "ddgst": false 00:34:01.796 }, 00:34:01.796 "method": "bdev_nvme_attach_controller" 00:34:01.796 }' 00:34:01.796 [2024-07-14 14:09:39.684713] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
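The `gen_nvmf_target_json` helper above expands its heredoc template and pipes it through `jq`, producing the resolved config that bdevperf reads from `/dev/fd/63`. As a minimal sketch (values taken from the resolved JSON printed in this log: `TEST_TRANSPORT=tcp`, `NVMF_FIRST_TARGET_IP=10.0.0.2`, `NVMF_PORT=4420`; the function name is illustrative, not part of SPDK), the equivalent structure is:

```python
import json

def nvmf_target_json(subsystem: int = 1) -> str:
    # Mirrors the per-subsystem block gen_nvmf_target_json emits, with the
    # ${hdgst:-false}/${ddgst:-false} shell defaults applied.
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=2)

print(nvmf_target_json())
```

bdevperf uses this `bdev_nvme_attach_controller` call to connect to the TCP target before starting the `verify` workload.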
00:34:01.796 [2024-07-14 14:09:39.684788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609036 ] 00:34:01.796 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.796 [2024-07-14 14:09:39.745169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.056 [2024-07-14 14:09:39.832591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.314 Running I/O for 15 seconds... 00:34:04.852 14:09:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1608748 00:34:04.852 14:09:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:04.852 [2024-07-14 14:09:42.651987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:04.852 [2024-07-14 14:09:42.652146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:04.852 [2024-07-14 14:09:42.652729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.652979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.652994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 
[2024-07-14 14:09:42.653295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.852 [2024-07-14 14:09:42.653312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.852 [2024-07-14 14:09:42.653327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.853 [2024-07-14 14:09:42.653801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 
[2024-07-14 14:09:42.653832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.653979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.653994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:04.853 [2024-07-14 14:09:42.654390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.853 [2024-07-14 14:09:42.654552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.853 [2024-07-14 14:09:42.654568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.853 [2024-07-14 14:09:42.654583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.853 [2024-07-14 14:09:42.654599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.853 [2024-07-14 14:09:42.654614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.853 [2024-07-14 14:09:42.654631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.853 [2024-07-14 14:09:42.654646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.853 [2024-07-14 14:09:42.654662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.853 [2024-07-14 14:09:42.654678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.654709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.654740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.654777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.654810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.654841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.654884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.654936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.654980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.654993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.655021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.655049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.655077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:04.854 [2024-07-14 14:09:42.655105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.655982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.655998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.656011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.656025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.656039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.656054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.854 [2024-07-14 14:09:42.656068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.854 [2024-07-14 14:09:42.656082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.855 [2024-07-14 14:09:42.656095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.855 [2024-07-14 14:09:42.656110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.855 [2024-07-14 14:09:42.656127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.855 [2024-07-14 14:09:42.656142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.855 [2024-07-14 14:09:42.656171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.855 [2024-07-14 14:09:42.656186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.855 [2024-07-14 14:09:42.656199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.855 [2024-07-14 14:09:42.656226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174a9a0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.656245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:04.855 [2024-07-14 14:09:42.656257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:04.855 [2024-07-14 14:09:42.656269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0
00:34:04.855 [2024-07-14 14:09:42.656283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:04.855 [2024-07-14 14:09:42.656348] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x174a9a0 was disconnected and freed. reset controller.
00:34:04.855 [2024-07-14 14:09:42.660199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.660304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.660964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.660995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.661012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.661252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.661497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.661521] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.661539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.665146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.674538] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.674923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.674969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.674985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.675224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.675475] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.675500] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.675516] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.679093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.688382] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.688778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.688809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.688827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.689075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.689319] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.689343] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.689359] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.692939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.702231] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.702627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.702659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.702676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.702924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.703167] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.703192] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.703208] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.706778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.716271] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.716637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.716668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.716685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.716935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.717178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.717202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.717217] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.720791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.730298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.730684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.730714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.730738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.730986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.731230] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.731254] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.731269] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.734840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.744357] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.744726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.744757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.744775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.745022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.745266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.745290] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.745305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.748882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.758358] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.758738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.758769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.758787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.759036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.759280] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.759304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.759319] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.762891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.772365] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.772758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.855 [2024-07-14 14:09:42.772789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.855 [2024-07-14 14:09:42.772806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.855 [2024-07-14 14:09:42.773054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.855 [2024-07-14 14:09:42.773306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.855 [2024-07-14 14:09:42.773330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.855 [2024-07-14 14:09:42.773346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.855 [2024-07-14 14:09:42.776920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.855 [2024-07-14 14:09:42.786394] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.855 [2024-07-14 14:09:42.786762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.856 [2024-07-14 14:09:42.786794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.856 [2024-07-14 14:09:42.786811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.856 [2024-07-14 14:09:42.787060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.856 [2024-07-14 14:09:42.787304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.856 [2024-07-14 14:09:42.787328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.856 [2024-07-14 14:09:42.787344] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.856 [2024-07-14 14:09:42.790921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.856 [2024-07-14 14:09:42.800400] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.856 [2024-07-14 14:09:42.800765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.856 [2024-07-14 14:09:42.800795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.856 [2024-07-14 14:09:42.800813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.856 [2024-07-14 14:09:42.801062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.856 [2024-07-14 14:09:42.801306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.856 [2024-07-14 14:09:42.801330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.856 [2024-07-14 14:09:42.801345] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.856 [2024-07-14 14:09:42.804919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.856 [2024-07-14 14:09:42.814391] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.856 [2024-07-14 14:09:42.814790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.856 [2024-07-14 14:09:42.814821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.856 [2024-07-14 14:09:42.814838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.856 [2024-07-14 14:09:42.815086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.856 [2024-07-14 14:09:42.815330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.856 [2024-07-14 14:09:42.815353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.856 [2024-07-14 14:09:42.815369] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.856 [2024-07-14 14:09:42.818942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.856 [2024-07-14 14:09:42.828426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.856 [2024-07-14 14:09:42.828816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.856 [2024-07-14 14:09:42.828847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:04.856 [2024-07-14 14:09:42.828864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:04.856 [2024-07-14 14:09:42.829111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:04.856 [2024-07-14 14:09:42.829354] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.856 [2024-07-14 14:09:42.829378] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.856 [2024-07-14 14:09:42.829394] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.114 [2024-07-14 14:09:42.832970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.114 [2024-07-14 14:09:42.842448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.114 [2024-07-14 14:09:42.842832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.114 [2024-07-14 14:09:42.842863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.114 [2024-07-14 14:09:42.842890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.114 [2024-07-14 14:09:42.843129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.114 [2024-07-14 14:09:42.843372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.114 [2024-07-14 14:09:42.843396] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.114 [2024-07-14 14:09:42.843411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.114 [2024-07-14 14:09:42.846989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.114 [2024-07-14 14:09:42.856467] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.114 [2024-07-14 14:09:42.856834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.114 [2024-07-14 14:09:42.856864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.114 [2024-07-14 14:09:42.856891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.114 [2024-07-14 14:09:42.857130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.114 [2024-07-14 14:09:42.857373] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.114 [2024-07-14 14:09:42.857397] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.114 [2024-07-14 14:09:42.857413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.114 [2024-07-14 14:09:42.860992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.114 [2024-07-14 14:09:42.870472] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.870842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.870873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.870907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.871146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.871389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.871413] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.871428] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.875007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.884488] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.884884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.884916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.884933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.885171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.885415] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.885438] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.885454] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.889046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.898531] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.898937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.898969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.898987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.899224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.899467] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.899491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.899507] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.903084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.912565] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.912969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.913000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.913017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.913255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.913497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.913526] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.913543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.917120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.926598] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.926994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.927025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.927043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.927280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.927523] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.927546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.927561] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.931141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.940617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.940991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.941022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.941039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.941277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.941520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.941544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.941559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.945140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.954632] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.954999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.955030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.955048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.955285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.955529] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.955553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.955568] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.959145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.968618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.969021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.969052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.969069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.969306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.969548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.969573] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.969588] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.973166] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.982644] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.983038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.983069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.983086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.983323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.983566] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.983591] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.983606] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:42.987195] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.115 [2024-07-14 14:09:42.996491] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.115 [2024-07-14 14:09:42.996860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.115 [2024-07-14 14:09:42.996903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.115 [2024-07-14 14:09:42.996921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.115 [2024-07-14 14:09:42.997159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.115 [2024-07-14 14:09:42.997402] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.115 [2024-07-14 14:09:42.997426] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.115 [2024-07-14 14:09:42.997441] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.115 [2024-07-14 14:09:43.001020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.010510] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.010866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.010903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.010921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.011164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.011408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.011432] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.011447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.116 [2024-07-14 14:09:43.015026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.024508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.024899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.024930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.024948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.025185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.025428] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.025452] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.025467] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.116 [2024-07-14 14:09:43.029046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.038518] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.038905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.038937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.038954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.039192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.039434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.039458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.039474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.116 [2024-07-14 14:09:43.043053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.052529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.052941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.052973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.052991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.053229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.053472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.053496] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.053517] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.116 [2024-07-14 14:09:43.057134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.066398] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.066768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.066799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.066816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.067064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.067308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.067332] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.067346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.116 [2024-07-14 14:09:43.070924] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.080402] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.080765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.080796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.080814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.081062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.081306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.081330] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.081345] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.116 [2024-07-14 14:09:43.084920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.116 [2024-07-14 14:09:43.094400] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.116 [2024-07-14 14:09:43.094796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.116 [2024-07-14 14:09:43.094826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.116 [2024-07-14 14:09:43.094844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.116 [2024-07-14 14:09:43.095096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.116 [2024-07-14 14:09:43.095340] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.116 [2024-07-14 14:09:43.095364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.116 [2024-07-14 14:09:43.095380] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.381 [2024-07-14 14:09:43.098957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.381 [2024-07-14 14:09:43.108438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.381 [2024-07-14 14:09:43.108833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.381 [2024-07-14 14:09:43.108869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.381 [2024-07-14 14:09:43.108898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.381 [2024-07-14 14:09:43.109137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.381 [2024-07-14 14:09:43.109380] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.382 [2024-07-14 14:09:43.109404] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.382 [2024-07-14 14:09:43.109419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.382 [2024-07-14 14:09:43.112995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.382 [2024-07-14 14:09:43.122477] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.382 [2024-07-14 14:09:43.122865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.382 [2024-07-14 14:09:43.122902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.382 [2024-07-14 14:09:43.122920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.382 [2024-07-14 14:09:43.123158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.382 [2024-07-14 14:09:43.123401] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.382 [2024-07-14 14:09:43.123425] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.382 [2024-07-14 14:09:43.123440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.382 [2024-07-14 14:09:43.127019] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.382 [2024-07-14 14:09:43.136495] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.382 [2024-07-14 14:09:43.136888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.382 [2024-07-14 14:09:43.136919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.382 [2024-07-14 14:09:43.136937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.382 [2024-07-14 14:09:43.137174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.382 [2024-07-14 14:09:43.137418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.382 [2024-07-14 14:09:43.137441] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.383 [2024-07-14 14:09:43.137457] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.383 [2024-07-14 14:09:43.141045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.383 [2024-07-14 14:09:43.150520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.383 [2024-07-14 14:09:43.150897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-14 14:09:43.150928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.383 [2024-07-14 14:09:43.150945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.383 [2024-07-14 14:09:43.151183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.383 [2024-07-14 14:09:43.151432] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.383 [2024-07-14 14:09:43.151456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.383 [2024-07-14 14:09:43.151472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.383 [2024-07-14 14:09:43.155050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.383 [2024-07-14 14:09:43.164546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.383 [2024-07-14 14:09:43.164905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-14 14:09:43.164936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.383 [2024-07-14 14:09:43.164954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.383 [2024-07-14 14:09:43.165191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.383 [2024-07-14 14:09:43.165435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.384 [2024-07-14 14:09:43.165459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.384 [2024-07-14 14:09:43.165474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.384 [2024-07-14 14:09:43.169054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.384 [2024-07-14 14:09:43.178530] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.384 [2024-07-14 14:09:43.178893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-14 14:09:43.178924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.384 [2024-07-14 14:09:43.178941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.384 [2024-07-14 14:09:43.179179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.384 [2024-07-14 14:09:43.179422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.384 [2024-07-14 14:09:43.179445] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.384 [2024-07-14 14:09:43.179461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.384 [2024-07-14 14:09:43.183040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.385 [2024-07-14 14:09:43.192517] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.385 [2024-07-14 14:09:43.192905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-14 14:09:43.192936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.385 [2024-07-14 14:09:43.192953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.385 [2024-07-14 14:09:43.193190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.385 [2024-07-14 14:09:43.193433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.385 [2024-07-14 14:09:43.193457] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.385 [2024-07-14 14:09:43.193472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.385 [2024-07-14 14:09:43.197061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.385 [2024-07-14 14:09:43.206535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.385 [2024-07-14 14:09:43.206921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-14 14:09:43.206952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.385 [2024-07-14 14:09:43.206970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.385 [2024-07-14 14:09:43.207207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.385 [2024-07-14 14:09:43.207450] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.385 [2024-07-14 14:09:43.207474] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.385 [2024-07-14 14:09:43.207489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.385 [2024-07-14 14:09:43.211067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.386 [2024-07-14 14:09:43.220536] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.386 [2024-07-14 14:09:43.220916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-14 14:09:43.220947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.386 [2024-07-14 14:09:43.220964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.386 [2024-07-14 14:09:43.221202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.386 [2024-07-14 14:09:43.221445] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.386 [2024-07-14 14:09:43.221469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.386 [2024-07-14 14:09:43.221484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.386 [2024-07-14 14:09:43.225063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.386 [2024-07-14 14:09:43.234533] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.386 [2024-07-14 14:09:43.234903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-14 14:09:43.234935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.386 [2024-07-14 14:09:43.234952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.387 [2024-07-14 14:09:43.235190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.387 [2024-07-14 14:09:43.235434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.387 [2024-07-14 14:09:43.235457] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.387 [2024-07-14 14:09:43.235472] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.387 [2024-07-14 14:09:43.239051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.387 [2024-07-14 14:09:43.248529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.387 [2024-07-14 14:09:43.248904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.387 [2024-07-14 14:09:43.248936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.387 [2024-07-14 14:09:43.248958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.387 [2024-07-14 14:09:43.249197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.387 [2024-07-14 14:09:43.249440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.387 [2024-07-14 14:09:43.249464] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.387 [2024-07-14 14:09:43.249479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.387 [2024-07-14 14:09:43.253059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.387 [2024-07-14 14:09:43.262535] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.387 [2024-07-14 14:09:43.262920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.388 [2024-07-14 14:09:43.262952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.388 [2024-07-14 14:09:43.262970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.388 [2024-07-14 14:09:43.263209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.388 [2024-07-14 14:09:43.263452] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.388 [2024-07-14 14:09:43.263476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.388 [2024-07-14 14:09:43.263491] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.388 [2024-07-14 14:09:43.267070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.388 [2024-07-14 14:09:43.276544] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.388 [2024-07-14 14:09:43.276936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.388 [2024-07-14 14:09:43.276967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.388 [2024-07-14 14:09:43.276984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.389 [2024-07-14 14:09:43.277222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.389 [2024-07-14 14:09:43.277465] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.389 [2024-07-14 14:09:43.277489] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.389 [2024-07-14 14:09:43.277504] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.389 [2024-07-14 14:09:43.281085] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.389 [2024-07-14 14:09:43.290559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.389 [2024-07-14 14:09:43.290951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.389 [2024-07-14 14:09:43.290982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.389 [2024-07-14 14:09:43.291000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.389 [2024-07-14 14:09:43.291237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.389 [2024-07-14 14:09:43.291480] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.389 [2024-07-14 14:09:43.291509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.389 [2024-07-14 14:09:43.291525] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.390 [2024-07-14 14:09:43.295110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.390 [2024-07-14 14:09:43.304594] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.390 [2024-07-14 14:09:43.304976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.390 [2024-07-14 14:09:43.305007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.390 [2024-07-14 14:09:43.305025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.390 [2024-07-14 14:09:43.305262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.390 [2024-07-14 14:09:43.305505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.390 [2024-07-14 14:09:43.305528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.390 [2024-07-14 14:09:43.305543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.390 [2024-07-14 14:09:43.309123] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.390 [2024-07-14 14:09:43.318610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.390 [2024-07-14 14:09:43.319007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.390 [2024-07-14 14:09:43.319039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.391 [2024-07-14 14:09:43.319057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.391 [2024-07-14 14:09:43.319295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.391 [2024-07-14 14:09:43.319538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.391 [2024-07-14 14:09:43.319563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.391 [2024-07-14 14:09:43.319578] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.391 [2024-07-14 14:09:43.323155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.391 [2024-07-14 14:09:43.332633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.391 [2024-07-14 14:09:43.333015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.391 [2024-07-14 14:09:43.333046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.391 [2024-07-14 14:09:43.333064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.391 [2024-07-14 14:09:43.333302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.392 [2024-07-14 14:09:43.333545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.392 [2024-07-14 14:09:43.333570] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.392 [2024-07-14 14:09:43.333585] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.392 [2024-07-14 14:09:43.337163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.392 [2024-07-14 14:09:43.346649] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.392 [2024-07-14 14:09:43.347048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-14 14:09:43.347079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.392 [2024-07-14 14:09:43.347096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.392 [2024-07-14 14:09:43.347334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.392 [2024-07-14 14:09:43.347577] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.392 [2024-07-14 14:09:43.347601] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.392 [2024-07-14 14:09:43.347617] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.392 [2024-07-14 14:09:43.351197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.677 [2024-07-14 14:09:43.360525] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.677 [2024-07-14 14:09:43.360951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.677 [2024-07-14 14:09:43.360983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.677 [2024-07-14 14:09:43.361001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.677 [2024-07-14 14:09:43.361238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.677 [2024-07-14 14:09:43.361482] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.677 [2024-07-14 14:09:43.361506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.677 [2024-07-14 14:09:43.361521] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.677 [2024-07-14 14:09:43.365109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.677 [2024-07-14 14:09:43.374384] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.677 [2024-07-14 14:09:43.374749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.677 [2024-07-14 14:09:43.374781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.677 [2024-07-14 14:09:43.374798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.677 [2024-07-14 14:09:43.375045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.677 [2024-07-14 14:09:43.375290] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.677 [2024-07-14 14:09:43.375314] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.677 [2024-07-14 14:09:43.375330] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.677 [2024-07-14 14:09:43.378945] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.388213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.388599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.388630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.388652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.388901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.389144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.389168] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.389188] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.392757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.402245] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.402642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.402673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.402691] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.402937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.403181] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.403205] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.403220] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.406797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.416284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.416646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.416676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.416694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.416940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.417184] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.417208] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.417224] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.420810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.430298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.430709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.430740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.430757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.431006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.431249] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.431279] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.431295] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.434868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.444148] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.444508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.444538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.444556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.444793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.445045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.445069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.445085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.448655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.458145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.458516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.458547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.458565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.458802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.459058] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.459084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.459100] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.462671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.472165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.472570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.472601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.472618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.472855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.473107] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.473131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.473147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.476716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.486209] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.486641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.486694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.486711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.486960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.487204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.487228] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.487243] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.490810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.500102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.500578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.500630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.500648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.500894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.501138] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.501162] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.501177] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.504751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.514029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.678 [2024-07-14 14:09:43.514450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.678 [2024-07-14 14:09:43.514504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:05.678 [2024-07-14 14:09:43.514521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:05.678 [2024-07-14 14:09:43.514758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:05.678 [2024-07-14 14:09:43.515013] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.678 [2024-07-14 14:09:43.515038] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.678 [2024-07-14 14:09:43.515053] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.678 [2024-07-14 14:09:43.518620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.678 [2024-07-14 14:09:43.527899] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.678 [2024-07-14 14:09:43.528378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.678 [2024-07-14 14:09:43.528427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.678 [2024-07-14 14:09:43.528445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.678 [2024-07-14 14:09:43.528689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.528943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.528968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.528984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.532553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.541819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.542247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.542297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.542315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.542552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.542795] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.542818] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.542833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.546412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.555684] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.556088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.556119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.556137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.556374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.556617] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.556640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.556655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.560236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.569714] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.570149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.570199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.570216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.570453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.570696] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.570720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.570741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.574322] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.583597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.583979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.584010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.584027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.584264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.584507] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.584531] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.584546] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.588128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.597624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.598068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.598098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.598115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.598353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.598595] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.598619] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.598634] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.602224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.611504] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.611964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.611995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.612013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.612250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.612493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.612517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.612532] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.616114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.679 [2024-07-14 14:09:43.625389] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.679 [2024-07-14 14:09:43.625781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.679 [2024-07-14 14:09:43.625818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.679 [2024-07-14 14:09:43.625836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.679 [2024-07-14 14:09:43.626085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.679 [2024-07-14 14:09:43.626329] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.679 [2024-07-14 14:09:43.626353] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.679 [2024-07-14 14:09:43.626368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.679 [2024-07-14 14:09:43.629945] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.639231] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.639701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.639732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.639750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.640010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.640254] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.640279] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.640294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.643864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.653140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.653509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.653540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.653557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.653794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.654047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.654072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.654087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.657655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.667151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.667520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.667551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.667569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.667805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.668065] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.668090] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.668105] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.671674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.681026] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.681419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.681450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.681467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.681704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.681959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.681984] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.681999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.685571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.695070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.695444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.695475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.695493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.695730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.695985] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.696009] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.696024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.699590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.709083] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.709446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.709476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.709493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.709731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.709986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.710011] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.710026] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.713604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.723113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.723511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.723542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.723559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.723796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.724052] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.724077] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.724093] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.727670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.736967] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.737355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.737385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.737403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.737640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.737897] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.737922] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.737937] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.741517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.750823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.751202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.941 [2024-07-14 14:09:43.751234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.941 [2024-07-14 14:09:43.751252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.941 [2024-07-14 14:09:43.751489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.941 [2024-07-14 14:09:43.751733] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.941 [2024-07-14 14:09:43.751757] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.941 [2024-07-14 14:09:43.751772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.941 [2024-07-14 14:09:43.755358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.941 [2024-07-14 14:09:43.764856] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.941 [2024-07-14 14:09:43.765242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.765273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.765299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.765538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.765781] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.765805] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.765821] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.769406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.778912] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.779300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.779331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.779348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.779586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.779829] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.779853] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.779869] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.783458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.792759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.793132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.793162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.793179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.793417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.793660] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.793684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.793699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.797290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.806793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.807200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.807231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.807249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.807486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.807730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.807759] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.807775] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.811359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.820657] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.821053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.821084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.821102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.821339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.821582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.821605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.821620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.825234] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.834523] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.834898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.834933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.834951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.835190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.835434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.835458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.835473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.839060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.848568] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.848935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.848967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.848984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.849222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.849466] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.849490] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.849505] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.853098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.862610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.863019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.863050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.863067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.863305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.863548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.863572] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.863587] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.867172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.876471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.876859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.876898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.876917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.877154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.877397] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.877421] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.877437] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.881018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.890324] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.890714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.890745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.890763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.891013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.891257] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.891281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.891296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.894868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.904167] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.904534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.942 [2024-07-14 14:09:43.904564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.942 [2024-07-14 14:09:43.904582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.942 [2024-07-14 14:09:43.904827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.942 [2024-07-14 14:09:43.905081] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.942 [2024-07-14 14:09:43.905106] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.942 [2024-07-14 14:09:43.905122] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:05.942 [2024-07-14 14:09:43.908688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:05.942 [2024-07-14 14:09:43.918189] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:05.942 [2024-07-14 14:09:43.918554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.943 [2024-07-14 14:09:43.918585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:05.943 [2024-07-14 14:09:43.918603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:05.943 [2024-07-14 14:09:43.918841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:05.943 [2024-07-14 14:09:43.919094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:05.943 [2024-07-14 14:09:43.919119] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:05.943 [2024-07-14 14:09:43.919134] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:43.922706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:43.932192] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:43.932557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:43.932588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:43.932606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.203 [2024-07-14 14:09:43.932843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.203 [2024-07-14 14:09:43.933095] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.203 [2024-07-14 14:09:43.933120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.203 [2024-07-14 14:09:43.933135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:43.936704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:43.946196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:43.946593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:43.946623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:43.946640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.203 [2024-07-14 14:09:43.946889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.203 [2024-07-14 14:09:43.947133] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.203 [2024-07-14 14:09:43.947157] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.203 [2024-07-14 14:09:43.947178] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:43.950748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:43.960034] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:43.960425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:43.960455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:43.960472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.203 [2024-07-14 14:09:43.960710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.203 [2024-07-14 14:09:43.960966] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.203 [2024-07-14 14:09:43.960991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.203 [2024-07-14 14:09:43.961006] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:43.964574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:43.974063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:43.974451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:43.974481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:43.974498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.203 [2024-07-14 14:09:43.974736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.203 [2024-07-14 14:09:43.974991] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.203 [2024-07-14 14:09:43.975016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.203 [2024-07-14 14:09:43.975031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:43.978600] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:43.988089] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:43.988487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:43.988518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:43.988535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.203 [2024-07-14 14:09:43.988772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.203 [2024-07-14 14:09:43.989027] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.203 [2024-07-14 14:09:43.989051] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.203 [2024-07-14 14:09:43.989066] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:43.992637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:44.002142] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:44.002536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:44.002566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:44.002584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.203 [2024-07-14 14:09:44.002821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.203 [2024-07-14 14:09:44.003075] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.203 [2024-07-14 14:09:44.003099] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.203 [2024-07-14 14:09:44.003114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.203 [2024-07-14 14:09:44.006691] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.203 [2024-07-14 14:09:44.016196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.203 [2024-07-14 14:09:44.016557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.203 [2024-07-14 14:09:44.016588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.203 [2024-07-14 14:09:44.016606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.016843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.017113] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.017138] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.017153] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.020723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.030232] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.030620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.030651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.030669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.030917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.031162] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.031187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.031202] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.034774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.044082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.044443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.044473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.044491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.044733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.044989] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.045014] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.045029] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.048604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.058110] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.058511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.058542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.058559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.058796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.059050] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.059074] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.059089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.062656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.072141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.072528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.072559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.072576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.072813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.073066] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.073091] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.073106] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.076676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.086140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.086561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.086609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.086626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.086864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.087118] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.087142] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.087164] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.090734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.100044] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.100493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.100524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.100541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.100779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.101037] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.101062] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.101078] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.104653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.113939] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.114378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.114429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.114446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.114683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.114937] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.114962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.114977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.118545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.127825] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.128200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.128231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.128248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.128485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.128729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.128752] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.128768] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.132350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.141834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.142208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.142244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.142262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.142500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.142742] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.142766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.142781] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.146364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.155843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.156218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.204 [2024-07-14 14:09:44.156249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.204 [2024-07-14 14:09:44.156266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.204 [2024-07-14 14:09:44.156503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.204 [2024-07-14 14:09:44.156746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.204 [2024-07-14 14:09:44.156769] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.204 [2024-07-14 14:09:44.156785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.204 [2024-07-14 14:09:44.160367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.204 [2024-07-14 14:09:44.169855] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.204 [2024-07-14 14:09:44.170227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.205 [2024-07-14 14:09:44.170259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.205 [2024-07-14 14:09:44.170276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.205 [2024-07-14 14:09:44.170514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.205 [2024-07-14 14:09:44.170757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.205 [2024-07-14 14:09:44.170781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.205 [2024-07-14 14:09:44.170796] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.205 [2024-07-14 14:09:44.174377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.205 [2024-07-14 14:09:44.183871] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.464 [2024-07-14 14:09:44.184277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.464 [2024-07-14 14:09:44.184309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.464 [2024-07-14 14:09:44.184328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.464 [2024-07-14 14:09:44.184567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.464 [2024-07-14 14:09:44.184820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.464 [2024-07-14 14:09:44.184844] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.464 [2024-07-14 14:09:44.184859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.464 [2024-07-14 14:09:44.188440] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.464 [2024-07-14 14:09:44.197725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.464 [2024-07-14 14:09:44.198131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.464 [2024-07-14 14:09:44.198163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.464 [2024-07-14 14:09:44.198181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.464 [2024-07-14 14:09:44.198418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.464 [2024-07-14 14:09:44.198661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.464 [2024-07-14 14:09:44.198685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.464 [2024-07-14 14:09:44.198700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.464 [2024-07-14 14:09:44.202281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.464 [2024-07-14 14:09:44.211762] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.464 [2024-07-14 14:09:44.212105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.464 [2024-07-14 14:09:44.212136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.464 [2024-07-14 14:09:44.212153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.464 [2024-07-14 14:09:44.212391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.464 [2024-07-14 14:09:44.212633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.464 [2024-07-14 14:09:44.212657] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.464 [2024-07-14 14:09:44.212673] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.464 [2024-07-14 14:09:44.216253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.464 [2024-07-14 14:09:44.225734] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.464 [2024-07-14 14:09:44.226109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.464 [2024-07-14 14:09:44.226139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.464 [2024-07-14 14:09:44.226157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.464 [2024-07-14 14:09:44.226394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.464 [2024-07-14 14:09:44.226637] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.464 [2024-07-14 14:09:44.226660] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.464 [2024-07-14 14:09:44.226675] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.464 [2024-07-14 14:09:44.230264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.464 [2024-07-14 14:09:44.239748] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.464 [2024-07-14 14:09:44.240151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.464 [2024-07-14 14:09:44.240182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.464 [2024-07-14 14:09:44.240199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.464 [2024-07-14 14:09:44.240436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.464 [2024-07-14 14:09:44.240679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.464 [2024-07-14 14:09:44.240703] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.464 [2024-07-14 14:09:44.240719] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.464 [2024-07-14 14:09:44.244298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.464 [2024-07-14 14:09:44.253780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.464 [2024-07-14 14:09:44.254174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.464 [2024-07-14 14:09:44.254205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.464 [2024-07-14 14:09:44.254222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.464 [2024-07-14 14:09:44.254460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.464 [2024-07-14 14:09:44.254703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.464 [2024-07-14 14:09:44.254726] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.464 [2024-07-14 14:09:44.254741] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.464 [2024-07-14 14:09:44.258324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.464 [2024-07-14 14:09:44.267808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.464 [2024-07-14 14:09:44.268184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.464 [2024-07-14 14:09:44.268215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.464 [2024-07-14 14:09:44.268233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.464 [2024-07-14 14:09:44.268470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.464 [2024-07-14 14:09:44.268713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.268737] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.268752] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.272360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.281646] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.282047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.282078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.282101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.282340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.282583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.282607] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.282622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.286206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.295485] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.295870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.295908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.295926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.296163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.296406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.296430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.296445] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.300025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.309521] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.309890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.309921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.309938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.310176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.310418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.310442] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.310458] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.314034] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.323509] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.323899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.323931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.323949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.324186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.324429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.324459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.324475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.328056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.337543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.337952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.337983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.338000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.338238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.338481] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.338505] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.338520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.342106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.351382] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.351769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.351800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.351817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.352067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.352310] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.352334] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.352349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.355930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.365411] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.365910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.365941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.365958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.366196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.366439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.366463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.366478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.370058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.379337] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.379707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.379738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.379755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.380004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.380249] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.380273] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.380288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.383856] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.393349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.393739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.393770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.393787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.394036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.394279] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.394304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.394318] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.397898] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.407379] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.407737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.407769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.407786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.408038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.465 [2024-07-14 14:09:44.408282] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.465 [2024-07-14 14:09:44.408305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.465 [2024-07-14 14:09:44.408321] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.465 [2024-07-14 14:09:44.411899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.465 [2024-07-14 14:09:44.421388] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.465 [2024-07-14 14:09:44.421780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.465 [2024-07-14 14:09:44.421811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.465 [2024-07-14 14:09:44.421829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.465 [2024-07-14 14:09:44.422080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.466 [2024-07-14 14:09:44.422325] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.466 [2024-07-14 14:09:44.422348] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.466 [2024-07-14 14:09:44.422363] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.466 [2024-07-14 14:09:44.425942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.466 [2024-07-14 14:09:44.435429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.466 [2024-07-14 14:09:44.435825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.466 [2024-07-14 14:09:44.435856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.466 [2024-07-14 14:09:44.435874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.466 [2024-07-14 14:09:44.436122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.466 [2024-07-14 14:09:44.436365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.466 [2024-07-14 14:09:44.436389] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.466 [2024-07-14 14:09:44.436404] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.466 [2024-07-14 14:09:44.439987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.726 [2024-07-14 14:09:44.449272] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.726 [2024-07-14 14:09:44.449691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.726 [2024-07-14 14:09:44.449722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.726 [2024-07-14 14:09:44.449740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.726 [2024-07-14 14:09:44.449993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.726 [2024-07-14 14:09:44.450237] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.726 [2024-07-14 14:09:44.450262] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.726 [2024-07-14 14:09:44.450277] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.726 [2024-07-14 14:09:44.453850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.726 [2024-07-14 14:09:44.463145] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.726 [2024-07-14 14:09:44.463512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.726 [2024-07-14 14:09:44.463543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.726 [2024-07-14 14:09:44.463561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.726 [2024-07-14 14:09:44.463798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.726 [2024-07-14 14:09:44.464051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.726 [2024-07-14 14:09:44.464076] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.726 [2024-07-14 14:09:44.464098] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.726 [2024-07-14 14:09:44.467670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.727 [2024-07-14 14:09:44.477168] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.727 [2024-07-14 14:09:44.477631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.727 [2024-07-14 14:09:44.477702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.727 [2024-07-14 14:09:44.477720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.727 [2024-07-14 14:09:44.477968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.727 [2024-07-14 14:09:44.478211] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.727 [2024-07-14 14:09:44.478235] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.727 [2024-07-14 14:09:44.478251] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.727 [2024-07-14 14:09:44.481837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.727 [2024-07-14 14:09:44.491128] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.727 [2024-07-14 14:09:44.491497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.727 [2024-07-14 14:09:44.491548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.727 [2024-07-14 14:09:44.491566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.727 [2024-07-14 14:09:44.491804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.727 [2024-07-14 14:09:44.492056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.727 [2024-07-14 14:09:44.492081] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.727 [2024-07-14 14:09:44.492096] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.727 [2024-07-14 14:09:44.495683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.727 [2024-07-14 14:09:44.504974] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.505365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.505397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.505414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.505652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.505906] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.505939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.505954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.509528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.518803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.519217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.519265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.519282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.519520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.519763] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.519786] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.519802] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.523380] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.532655] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.533060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.533091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.533108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.533346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.533589] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.533613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.533629] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.537208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.546694] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.547070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.547102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.547119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.547356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.547599] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.547623] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.547639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.551219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.560709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.561094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.561125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.561142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.561379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.561628] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.561653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.561668] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.565251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.574751] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.575151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.575183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.575200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.575438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.575681] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.575705] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.575720] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.579299] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.588783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.589182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.589213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.589230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.589467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.589711] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.589735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.589750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.593326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.602815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.603184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.603214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.603231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.603468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.603711] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.603735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.603750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.607335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.727 [2024-07-14 14:09:44.616819] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.727 [2024-07-14 14:09:44.617198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.727 [2024-07-14 14:09:44.617229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.727 [2024-07-14 14:09:44.617247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.727 [2024-07-14 14:09:44.617484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.727 [2024-07-14 14:09:44.617727] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.727 [2024-07-14 14:09:44.617751] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.727 [2024-07-14 14:09:44.617766] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.727 [2024-07-14 14:09:44.621345] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.728 [2024-07-14 14:09:44.630825] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.728 [2024-07-14 14:09:44.631226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.728 [2024-07-14 14:09:44.631256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.728 [2024-07-14 14:09:44.631273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.728 [2024-07-14 14:09:44.631510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.728 [2024-07-14 14:09:44.631753] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.728 [2024-07-14 14:09:44.631777] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.728 [2024-07-14 14:09:44.631792] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.728 [2024-07-14 14:09:44.635370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.728 [2024-07-14 14:09:44.644851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.728 [2024-07-14 14:09:44.645219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.728 [2024-07-14 14:09:44.645249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.728 [2024-07-14 14:09:44.645266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.728 [2024-07-14 14:09:44.645503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.728 [2024-07-14 14:09:44.645746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.728 [2024-07-14 14:09:44.645770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.728 [2024-07-14 14:09:44.645785] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.728 [2024-07-14 14:09:44.649363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.728 [2024-07-14 14:09:44.658842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.728 [2024-07-14 14:09:44.659219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.728 [2024-07-14 14:09:44.659255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.728 [2024-07-14 14:09:44.659274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.728 [2024-07-14 14:09:44.659512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.728 [2024-07-14 14:09:44.659754] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.728 [2024-07-14 14:09:44.659778] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.728 [2024-07-14 14:09:44.659793] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.728 [2024-07-14 14:09:44.663369] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.728 [2024-07-14 14:09:44.672851] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.728 [2024-07-14 14:09:44.673204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.728 [2024-07-14 14:09:44.673235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.728 [2024-07-14 14:09:44.673252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.728 [2024-07-14 14:09:44.673489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.728 [2024-07-14 14:09:44.673732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.728 [2024-07-14 14:09:44.673756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.728 [2024-07-14 14:09:44.673771] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.728 [2024-07-14 14:09:44.677345] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.728 [2024-07-14 14:09:44.686825] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.728 [2024-07-14 14:09:44.687202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.728 [2024-07-14 14:09:44.687233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.728 [2024-07-14 14:09:44.687250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.728 [2024-07-14 14:09:44.687487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.728 [2024-07-14 14:09:44.687730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.728 [2024-07-14 14:09:44.687754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.728 [2024-07-14 14:09:44.687769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.728 [2024-07-14 14:09:44.691529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.728 [2024-07-14 14:09:44.700803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.728 [2024-07-14 14:09:44.701155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.728 [2024-07-14 14:09:44.701186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.728 [2024-07-14 14:09:44.701203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.728 [2024-07-14 14:09:44.701440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.728 [2024-07-14 14:09:44.701689] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.728 [2024-07-14 14:09:44.701714] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.728 [2024-07-14 14:09:44.701729] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.728 [2024-07-14 14:09:44.705309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.714785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.715187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.715217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.715235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.715472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.715715] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.715738] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.715753] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.719331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.728624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.728977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.729008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.729025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.729262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.729505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.729529] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.729544] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.733122] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.742597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.743004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.743035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.743052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.743289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.743532] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.743556] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.743571] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.747148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.756637] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.757011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.757042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.757059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.757296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.757538] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.757563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.757578] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.761158] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.770638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.771036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.771067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.771084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.771320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.771563] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.771587] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.771602] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.775179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.784656] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.785028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.785059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.785076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.785314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.785557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.785580] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.785596] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.789174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.798655] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.799058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.799089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.799111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.799351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.799594] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.799618] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.799633] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.803211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.812690] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.813041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.813072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.813090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.813327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.813570] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.813594] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.813609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.817188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.826678] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.988 [2024-07-14 14:09:44.827053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.988 [2024-07-14 14:09:44.827083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.988 [2024-07-14 14:09:44.827100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.988 [2024-07-14 14:09:44.827338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.988 [2024-07-14 14:09:44.827581] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.988 [2024-07-14 14:09:44.827605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.988 [2024-07-14 14:09:44.827620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.988 [2024-07-14 14:09:44.831200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.988 [2024-07-14 14:09:44.840674] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.989 [2024-07-14 14:09:44.841049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.989 [2024-07-14 14:09:44.841081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.989 [2024-07-14 14:09:44.841098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.989 [2024-07-14 14:09:44.841336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.989 [2024-07-14 14:09:44.841579] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.989 [2024-07-14 14:09:44.841608] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.989 [2024-07-14 14:09:44.841624] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.989 [2024-07-14 14:09:44.845208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.989 [2024-07-14 14:09:44.854512] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.989 [2024-07-14 14:09:44.854937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.989 [2024-07-14 14:09:44.854969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.989 [2024-07-14 14:09:44.854986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.989 [2024-07-14 14:09:44.855223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.989 [2024-07-14 14:09:44.855467] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.989 [2024-07-14 14:09:44.855491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.989 [2024-07-14 14:09:44.855506] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.989 [2024-07-14 14:09:44.859089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.989 [2024-07-14 14:09:44.868364] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.989 [2024-07-14 14:09:44.868736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.989 [2024-07-14 14:09:44.868767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.989 [2024-07-14 14:09:44.868784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.989 [2024-07-14 14:09:44.869032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.989 [2024-07-14 14:09:44.869276] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.989 [2024-07-14 14:09:44.869300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.989 [2024-07-14 14:09:44.869315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.989 [2024-07-14 14:09:44.872895] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.989 [2024-07-14 14:09:44.882383] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:06.989 [2024-07-14 14:09:44.882745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.989 [2024-07-14 14:09:44.882775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:06.989 [2024-07-14 14:09:44.882792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:06.989 [2024-07-14 14:09:44.883040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:06.989 [2024-07-14 14:09:44.883284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:06.989 [2024-07-14 14:09:44.883308] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:06.989 [2024-07-14 14:09:44.883324] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:06.989 [2024-07-14 14:09:44.886900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:06.989 [2024-07-14 14:09:44.896426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.989 [2024-07-14 14:09:44.896828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.989 [2024-07-14 14:09:44.896858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.989 [2024-07-14 14:09:44.896884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.989 [2024-07-14 14:09:44.897125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.989 [2024-07-14 14:09:44.897368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.989 [2024-07-14 14:09:44.897392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.989 [2024-07-14 14:09:44.897407] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.989 [2024-07-14 14:09:44.900986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.989 [2024-07-14 14:09:44.910271] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.989 [2024-07-14 14:09:44.910677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.989 [2024-07-14 14:09:44.910707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.989 [2024-07-14 14:09:44.910724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.989 [2024-07-14 14:09:44.910971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.989 [2024-07-14 14:09:44.911215] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.989 [2024-07-14 14:09:44.911239] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.989 [2024-07-14 14:09:44.911254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.989 [2024-07-14 14:09:44.914825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.989 [2024-07-14 14:09:44.924126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.989 [2024-07-14 14:09:44.924516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.989 [2024-07-14 14:09:44.924547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.989 [2024-07-14 14:09:44.924564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.989 [2024-07-14 14:09:44.924802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.989 [2024-07-14 14:09:44.925055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.989 [2024-07-14 14:09:44.925080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.989 [2024-07-14 14:09:44.925095] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.989 [2024-07-14 14:09:44.928672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.989 [2024-07-14 14:09:44.938174] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.989 [2024-07-14 14:09:44.938569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.989 [2024-07-14 14:09:44.938600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.989 [2024-07-14 14:09:44.938616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.989 [2024-07-14 14:09:44.938860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.989 [2024-07-14 14:09:44.939112] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.989 [2024-07-14 14:09:44.939137] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.989 [2024-07-14 14:09:44.939152] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.989 [2024-07-14 14:09:44.942724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.989 [2024-07-14 14:09:44.952217] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.989 [2024-07-14 14:09:44.952607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.989 [2024-07-14 14:09:44.952638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.989 [2024-07-14 14:09:44.952655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.989 [2024-07-14 14:09:44.952902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.989 [2024-07-14 14:09:44.953146] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.989 [2024-07-14 14:09:44.953170] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.989 [2024-07-14 14:09:44.953185] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:06.989 [2024-07-14 14:09:44.956753] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:06.989 [2024-07-14 14:09:44.966251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:06.989 [2024-07-14 14:09:44.966594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.989 [2024-07-14 14:09:44.966625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:06.989 [2024-07-14 14:09:44.966642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:06.989 [2024-07-14 14:09:44.966888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:06.989 [2024-07-14 14:09:44.967132] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:06.989 [2024-07-14 14:09:44.967156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:06.989 [2024-07-14 14:09:44.967171] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.249 [2024-07-14 14:09:44.970755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.249 [2024-07-14 14:09:44.980269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:44.980667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:44.980698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:44.980715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:44.980964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:44.981208] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:44.981232] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:44.981254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:44.984401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:44.993626] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:44.993984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:44.994013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:44.994028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:44.994256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:44.994472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:44.994492] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:44.994504] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:44.997560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.006892] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.007293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.007321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.007337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.007564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.007780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.007800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.007812] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.010913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.020240] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.020640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.020667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.020682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.020937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.021142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.021163] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.021176] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.024206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.033543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.033933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.033966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.033983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.034212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.034429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.034449] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.034461] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.037466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.046870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.047237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.047265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.047281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.047510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.047726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.047746] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.047759] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.050776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.060137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.060530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.060557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.060573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.060814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.061056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.061077] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.061090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.064116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.073425] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.073805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.073833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.073848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.074085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.074327] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.074347] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.074360] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.077339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.086587] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.087025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.087053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.087068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.087310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.087509] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.087528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.087540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.090527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.099895] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.100352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.100379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.100394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.100630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.100829] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.100849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.100884] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.103912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.113220] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.113635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.113662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.113693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.113945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.114179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.250 [2024-07-14 14:09:45.114201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.250 [2024-07-14 14:09:45.114214] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.250 [2024-07-14 14:09:45.117253] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.250 [2024-07-14 14:09:45.126441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.250 [2024-07-14 14:09:45.126817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.250 [2024-07-14 14:09:45.126844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.250 [2024-07-14 14:09:45.126860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.250 [2024-07-14 14:09:45.127096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.250 [2024-07-14 14:09:45.127316] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.251 [2024-07-14 14:09:45.127336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.251 [2024-07-14 14:09:45.127349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.251 [2024-07-14 14:09:45.130352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.251 [2024-07-14 14:09:45.139633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.251 [2024-07-14 14:09:45.139999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.251 [2024-07-14 14:09:45.140027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.251 [2024-07-14 14:09:45.140043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.251 [2024-07-14 14:09:45.140272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.251 [2024-07-14 14:09:45.140488] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.251 [2024-07-14 14:09:45.140508] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.251 [2024-07-14 14:09:45.140520] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.251 [2024-07-14 14:09:45.143562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.251 [2024-07-14 14:09:45.152948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.251 [2024-07-14 14:09:45.153369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.251 [2024-07-14 14:09:45.153398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.251 [2024-07-14 14:09:45.153413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.251 [2024-07-14 14:09:45.153646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.251 [2024-07-14 14:09:45.153885] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.251 [2024-07-14 14:09:45.153906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.251 [2024-07-14 14:09:45.153933] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.251 [2024-07-14 14:09:45.156933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.251 [2024-07-14 14:09:45.166253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.251 [2024-07-14 14:09:45.166623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.251 [2024-07-14 14:09:45.166652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.251 [2024-07-14 14:09:45.166673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.251 [2024-07-14 14:09:45.166898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.251 [2024-07-14 14:09:45.167117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.251 [2024-07-14 14:09:45.167139] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.251 [2024-07-14 14:09:45.167153] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.251 [2024-07-14 14:09:45.170502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.251 [2024-07-14 14:09:45.179630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.251 [2024-07-14 14:09:45.179990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.251 [2024-07-14 14:09:45.180019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.251 [2024-07-14 14:09:45.180035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.251 [2024-07-14 14:09:45.180263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.251 [2024-07-14 14:09:45.180478] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.251 [2024-07-14 14:09:45.180498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.251 [2024-07-14 14:09:45.180511] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.251 [2024-07-14 14:09:45.183584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.251 [2024-07-14 14:09:45.192873] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.251 [2024-07-14 14:09:45.193252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.251 [2024-07-14 14:09:45.193279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.251 [2024-07-14 14:09:45.193309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.251 [2024-07-14 14:09:45.193564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.251 [2024-07-14 14:09:45.193764] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.251 [2024-07-14 14:09:45.193783] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.251 [2024-07-14 14:09:45.193796] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.251 [2024-07-14 14:09:45.196801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.251 [2024-07-14 14:09:45.206083] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.251 [2024-07-14 14:09:45.206542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.251 [2024-07-14 14:09:45.206569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.251 [2024-07-14 14:09:45.206585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.251 [2024-07-14 14:09:45.206827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.251 [2024-07-14 14:09:45.207074] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.251 [2024-07-14 14:09:45.207100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.251 [2024-07-14 14:09:45.207114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.251 [2024-07-14 14:09:45.210108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.251 [2024-07-14 14:09:45.219439] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.251 [2024-07-14 14:09:45.219844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.251 [2024-07-14 14:09:45.219872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.251 [2024-07-14 14:09:45.219898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.251 [2024-07-14 14:09:45.220112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.251 [2024-07-14 14:09:45.220345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.251 [2024-07-14 14:09:45.220365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.251 [2024-07-14 14:09:45.220378] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.251 [2024-07-14 14:09:45.223397] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.232881] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.512 [2024-07-14 14:09:45.233305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.512 [2024-07-14 14:09:45.233347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.512 [2024-07-14 14:09:45.233363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.512 [2024-07-14 14:09:45.233592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.512 [2024-07-14 14:09:45.233808] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.512 [2024-07-14 14:09:45.233828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.512 [2024-07-14 14:09:45.233841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.512 [2024-07-14 14:09:45.237036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.246231] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.512 [2024-07-14 14:09:45.246602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.512 [2024-07-14 14:09:45.246629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.512 [2024-07-14 14:09:45.246644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.512 [2024-07-14 14:09:45.246889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.512 [2024-07-14 14:09:45.247101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.512 [2024-07-14 14:09:45.247122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.512 [2024-07-14 14:09:45.247136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.512 [2024-07-14 14:09:45.250153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.259465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.512 [2024-07-14 14:09:45.259901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.512 [2024-07-14 14:09:45.259929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.512 [2024-07-14 14:09:45.259945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.512 [2024-07-14 14:09:45.260187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.512 [2024-07-14 14:09:45.260387] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.512 [2024-07-14 14:09:45.260406] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.512 [2024-07-14 14:09:45.260419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.512 [2024-07-14 14:09:45.263438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.272752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.512 [2024-07-14 14:09:45.273134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.512 [2024-07-14 14:09:45.273162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.512 [2024-07-14 14:09:45.273177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.512 [2024-07-14 14:09:45.273406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.512 [2024-07-14 14:09:45.273620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.512 [2024-07-14 14:09:45.273640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.512 [2024-07-14 14:09:45.273652] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.512 [2024-07-14 14:09:45.276670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.285968] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.512 [2024-07-14 14:09:45.286369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.512 [2024-07-14 14:09:45.286411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.512 [2024-07-14 14:09:45.286426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.512 [2024-07-14 14:09:45.286679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.512 [2024-07-14 14:09:45.286905] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.512 [2024-07-14 14:09:45.286940] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.512 [2024-07-14 14:09:45.286954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.512 [2024-07-14 14:09:45.289956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.299275] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.512 [2024-07-14 14:09:45.299716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.512 [2024-07-14 14:09:45.299744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.512 [2024-07-14 14:09:45.299764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.512 [2024-07-14 14:09:45.300005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.512 [2024-07-14 14:09:45.300247] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.512 [2024-07-14 14:09:45.300267] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.512 [2024-07-14 14:09:45.300280] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.512 [2024-07-14 14:09:45.303256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.512 [2024-07-14 14:09:45.312575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.312951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.312979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.312995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.313236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.313435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.313454] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.313467] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.316490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.325780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.326148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.326176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.326192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.326421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.326634] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.326654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.326666] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.329685] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.339005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.339410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.339437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.339453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.339694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.339954] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.339981] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.339996] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.343023] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.352399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.352813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.352854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.352870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.353093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.353336] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.353356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.353368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.356351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.365645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.366000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.366027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.366043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.366282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.366497] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.366516] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.366529] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.369511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.378866] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.379388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.379430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.379446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.379686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.379942] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.379964] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.379977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.382977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.392074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.392509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.392551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.392567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.392809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.393056] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.393078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.393091] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.396090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.405380] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.405816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.405843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.405859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.406094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.406330] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.406350] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.406363] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.409337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.418699] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.419087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.419115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.419131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.419359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.419587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.419609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.419623] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.422957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.432098] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.432440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.432480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.432495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.432721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.432986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.433008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.433022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.513 [2024-07-14 14:09:45.436082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.513 [2024-07-14 14:09:45.445352] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.513 [2024-07-14 14:09:45.445730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.513 [2024-07-14 14:09:45.445757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.513 [2024-07-14 14:09:45.445772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.513 [2024-07-14 14:09:45.446025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.513 [2024-07-14 14:09:45.446250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.513 [2024-07-14 14:09:45.446270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.513 [2024-07-14 14:09:45.446282] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.514 [2024-07-14 14:09:45.449261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.514 [2024-07-14 14:09:45.458597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.514 [2024-07-14 14:09:45.459034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.514 [2024-07-14 14:09:45.459062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.514 [2024-07-14 14:09:45.459078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.514 [2024-07-14 14:09:45.459326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.514 [2024-07-14 14:09:45.459541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.514 [2024-07-14 14:09:45.459561] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.514 [2024-07-14 14:09:45.459573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.514 [2024-07-14 14:09:45.462559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.514 [2024-07-14 14:09:45.471852] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.514 [2024-07-14 14:09:45.472254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.514 [2024-07-14 14:09:45.472296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.514 [2024-07-14 14:09:45.472311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.514 [2024-07-14 14:09:45.472581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.514 [2024-07-14 14:09:45.472780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.514 [2024-07-14 14:09:45.472800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.514 [2024-07-14 14:09:45.472817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.514 [2024-07-14 14:09:45.475822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.514 [2024-07-14 14:09:45.485194] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.514 [2024-07-14 14:09:45.485540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.514 [2024-07-14 14:09:45.485567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.514 [2024-07-14 14:09:45.485583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.514 [2024-07-14 14:09:45.485805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.514 [2024-07-14 14:09:45.486053] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.514 [2024-07-14 14:09:45.486075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.514 [2024-07-14 14:09:45.486089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.514 [2024-07-14 14:09:45.489189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.773 [2024-07-14 14:09:45.498587] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.773 [2024-07-14 14:09:45.498994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:07.773 [2024-07-14 14:09:45.499023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420
00:34:07.773 [2024-07-14 14:09:45.499039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set
00:34:07.773 [2024-07-14 14:09:45.499268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.773 [2024-07-14 14:09:45.499501] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:07.773 [2024-07-14 14:09:45.499522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:07.773 [2024-07-14 14:09:45.499535] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:07.773 [2024-07-14 14:09:45.502578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:07.773 [2024-07-14 14:09:45.511847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.773 [2024-07-14 14:09:45.512332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.773 [2024-07-14 14:09:45.512374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.773 [2024-07-14 14:09:45.512390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.773 [2024-07-14 14:09:45.512635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.773 [2024-07-14 14:09:45.512849] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.773 [2024-07-14 14:09:45.512869] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.773 [2024-07-14 14:09:45.512890] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.773 [2024-07-14 14:09:45.515916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.773 [2024-07-14 14:09:45.525078] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.773 [2024-07-14 14:09:45.525500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.773 [2024-07-14 14:09:45.525531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.773 [2024-07-14 14:09:45.525561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.773 [2024-07-14 14:09:45.525797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.773 [2024-07-14 14:09:45.526045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.773 [2024-07-14 14:09:45.526067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.773 [2024-07-14 14:09:45.526080] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.773 [2024-07-14 14:09:45.529076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.773 [2024-07-14 14:09:45.538389] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.773 [2024-07-14 14:09:45.538765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.773 [2024-07-14 14:09:45.538791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.773 [2024-07-14 14:09:45.538806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.773 [2024-07-14 14:09:45.539061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.773 [2024-07-14 14:09:45.539300] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.773 [2024-07-14 14:09:45.539320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.773 [2024-07-14 14:09:45.539333] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.773 [2024-07-14 14:09:45.542312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.773 [2024-07-14 14:09:45.551684] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.773 [2024-07-14 14:09:45.552076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.773 [2024-07-14 14:09:45.552119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.773 [2024-07-14 14:09:45.552134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.773 [2024-07-14 14:09:45.552388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.773 [2024-07-14 14:09:45.552587] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.773 [2024-07-14 14:09:45.552607] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.773 [2024-07-14 14:09:45.552619] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.773 [2024-07-14 14:09:45.555619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.773 [2024-07-14 14:09:45.564972] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.773 [2024-07-14 14:09:45.565375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.773 [2024-07-14 14:09:45.565419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.773 [2024-07-14 14:09:45.565434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.773 [2024-07-14 14:09:45.565705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.773 [2024-07-14 14:09:45.565957] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.773 [2024-07-14 14:09:45.565978] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.565992] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.568991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.578324] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.578638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.578680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.578695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.578930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.579158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.579194] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.579207] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.582202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.591573] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.591965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.591994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.592010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.592239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.592454] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.592475] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.592487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.595491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.604788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.605155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.605183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.605198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.605426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.605641] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.605661] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.605674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.608657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.618080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.618511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.618539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.618555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.618796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.619025] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.619046] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.619059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.622068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.631360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.631737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.631765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.631781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.632030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.632250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.632270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.632282] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.635291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.644619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.644994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.645022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.645055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1608748 Killed "${NVMF_APP[@]}" "$@" 00:34:07.774 [2024-07-14 14:09:45.645309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.645516] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.645536] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.645548] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 [2024-07-14 14:09:45.648621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1609703 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1609703 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1609703 ']' 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:07.774 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 [2024-07-14 14:09:45.658175] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.658507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.658534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.658550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.658772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor
00:34:07.774 [2024-07-14 14:09:45.659024] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.659047] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.659063] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.662189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:07.774 [2024-07-14 14:09:45.671643] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.672011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.672040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.672056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.672271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.672499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.672535] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.672549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.675907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.685080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.685538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.685566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.685588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.685831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.686037] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.686058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.686070] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.689077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.698383] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.698777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.698819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.698836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.774 [2024-07-14 14:09:45.699058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.774 [2024-07-14 14:09:45.699230] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:07.774 [2024-07-14 14:09:45.699304] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.774 [2024-07-14 14:09:45.699307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.774 [2024-07-14 14:09:45.699336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.774 [2024-07-14 14:09:45.699349] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.774 [2024-07-14 14:09:45.702426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.774 [2024-07-14 14:09:45.711632] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.774 [2024-07-14 14:09:45.712028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.774 [2024-07-14 14:09:45.712056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.774 [2024-07-14 14:09:45.712072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.775 [2024-07-14 14:09:45.712300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.775 [2024-07-14 14:09:45.712536] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.775 [2024-07-14 14:09:45.712556] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.775 [2024-07-14 14:09:45.712570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.775 [2024-07-14 14:09:45.716139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.775 [2024-07-14 14:09:45.725627] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.775 [2024-07-14 14:09:45.726061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.775 [2024-07-14 14:09:45.726089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.775 [2024-07-14 14:09:45.726105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.775 [2024-07-14 14:09:45.726364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.775 [2024-07-14 14:09:45.726608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.775 [2024-07-14 14:09:45.726632] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.775 [2024-07-14 14:09:45.726647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.775 [2024-07-14 14:09:45.730204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.775 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.775 [2024-07-14 14:09:45.739592] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.775 [2024-07-14 14:09:45.739994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.775 [2024-07-14 14:09:45.740022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.775 [2024-07-14 14:09:45.740038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.775 [2024-07-14 14:09:45.740276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.775 [2024-07-14 14:09:45.740519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.775 [2024-07-14 14:09:45.740543] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.775 [2024-07-14 14:09:45.740558] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:07.775 [2024-07-14 14:09:45.744090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:07.775 [2024-07-14 14:09:45.753521] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:07.775 [2024-07-14 14:09:45.753926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:07.775 [2024-07-14 14:09:45.753955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:07.775 [2024-07-14 14:09:45.753971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:07.775 [2024-07-14 14:09:45.754200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:07.775 [2024-07-14 14:09:45.754455] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:07.775 [2024-07-14 14:09:45.754480] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:07.775 [2024-07-14 14:09:45.754495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.033 [2024-07-14 14:09:45.758035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.033 [2024-07-14 14:09:45.767414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.033 [2024-07-14 14:09:45.767807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.033 [2024-07-14 14:09:45.767838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.033 [2024-07-14 14:09:45.767856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.033 [2024-07-14 14:09:45.768114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.033 [2024-07-14 14:09:45.768374] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.033 [2024-07-14 14:09:45.768399] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.033 [2024-07-14 14:09:45.768420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.033 [2024-07-14 14:09:45.771821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:08.033 [2024-07-14 14:09:45.771964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.033 [2024-07-14 14:09:45.781278] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.033 [2024-07-14 14:09:45.781808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.033 [2024-07-14 14:09:45.781848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.033 [2024-07-14 14:09:45.781868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.033 [2024-07-14 14:09:45.782131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.033 [2024-07-14 14:09:45.782396] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.033 [2024-07-14 14:09:45.782421] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.033 [2024-07-14 14:09:45.782439] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.033 [2024-07-14 14:09:45.785965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.033 [2024-07-14 14:09:45.795262] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.033 [2024-07-14 14:09:45.795700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.033 [2024-07-14 14:09:45.795747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.033 [2024-07-14 14:09:45.795765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.033 [2024-07-14 14:09:45.796014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.033 [2024-07-14 14:09:45.796257] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.033 [2024-07-14 14:09:45.796282] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.033 [2024-07-14 14:09:45.796298] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.033 [2024-07-14 14:09:45.799798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.033 [2024-07-14 14:09:45.809113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.033 [2024-07-14 14:09:45.809513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.033 [2024-07-14 14:09:45.809540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.033 [2024-07-14 14:09:45.809556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.809792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.810043] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.810065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.810079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.813633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.823033] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.823560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.823609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.823629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.823905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.824145] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.824182] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.824199] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.827747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.836864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.837353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.837389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.837408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.837652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.837907] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.837943] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.837958] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.841444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.850732] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.851141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.851170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.851187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.851438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.851681] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.851706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.851721] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.855248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:08.034 [2024-07-14 14:09:45.862561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.034 [2024-07-14 14:09:45.862599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.034 [2024-07-14 14:09:45.862615] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.034 [2024-07-14 14:09:45.862629] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:08.034 [2024-07-14 14:09:45.862648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.034 [2024-07-14 14:09:45.862728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.034 [2024-07-14 14:09:45.862845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.034 [2024-07-14 14:09:45.862847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.034 [2024-07-14 14:09:45.864292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.864661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.864691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.864708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.864938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.865160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.865182] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.865211] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.868378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.877763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.878312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.878351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.878371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.878608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.878825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.878846] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.878862] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.882061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.891410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.891950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.891994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.892014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.892253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.892470] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.892492] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.892508] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.895674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.904997] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.905636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.905680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.905698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.905932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.906156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.906193] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.906210] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.909374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.918453] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.918888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.918925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.918943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.919178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.919393] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.919415] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.919430] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.922633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.932020] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.932498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.932540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.932560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.932798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.933047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.933070] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.933087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.936371] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.945602] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.946110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.946148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.034 [2024-07-14 14:09:45.946167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.034 [2024-07-14 14:09:45.946415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.034 [2024-07-14 14:09:45.946631] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.034 [2024-07-14 14:09:45.946652] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.034 [2024-07-14 14:09:45.946667] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.034 [2024-07-14 14:09:45.949836] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.034 [2024-07-14 14:09:45.958961] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.034 [2024-07-14 14:09:45.959324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.034 [2024-07-14 14:09:45.959351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.035 [2024-07-14 14:09:45.959367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.035 [2024-07-14 14:09:45.959597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.035 [2024-07-14 14:09:45.959810] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.035 [2024-07-14 14:09:45.959831] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.035 [2024-07-14 14:09:45.959845] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.035 [2024-07-14 14:09:45.963141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.035 [2024-07-14 14:09:45.972529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.035 [2024-07-14 14:09:45.972908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.035 [2024-07-14 14:09:45.972937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.035 [2024-07-14 14:09:45.972953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.035 [2024-07-14 14:09:45.973167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.035 [2024-07-14 14:09:45.973385] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.035 [2024-07-14 14:09:45.973407] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.035 [2024-07-14 14:09:45.973420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.035 [2024-07-14 14:09:45.976679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.035 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:08.035 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:34:08.035 14:09:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:08.035 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.035 14:09:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.035 [2024-07-14 14:09:45.986216] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.035 [2024-07-14 14:09:45.986589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.035 [2024-07-14 14:09:45.986617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.035 [2024-07-14 14:09:45.986632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.035 [2024-07-14 14:09:45.986852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.035 [2024-07-14 14:09:45.987108] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.035 [2024-07-14 14:09:45.987131] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.035 [2024-07-14 14:09:45.987145] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.035 [2024-07-14 14:09:45.990418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.035 [2024-07-14 14:09:45.999638] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.035 [2024-07-14 14:09:46.000046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.035 [2024-07-14 14:09:46.000074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.035 [2024-07-14 14:09:46.000089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.035 [2024-07-14 14:09:46.000303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.035 [2024-07-14 14:09:46.000531] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.035 [2024-07-14 14:09:46.000553] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.035 [2024-07-14 14:09:46.000567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.035 [2024-07-14 14:09:46.003737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.035 14:09:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.035 14:09:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:08.035 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.035 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.035 [2024-07-14 14:09:46.009611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.035 [2024-07-14 14:09:46.013368] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.035 [2024-07-14 14:09:46.013711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.035 [2024-07-14 14:09:46.013740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.035 [2024-07-14 14:09:46.013757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.035 [2024-07-14 14:09:46.013982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.035 [2024-07-14 14:09:46.014202] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.035 [2024-07-14 14:09:46.014224] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.035 [2024-07-14 14:09:46.014238] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.295 [2024-07-14 14:09:46.017533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:08.295 [2024-07-14 14:09:46.026970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.295 [2024-07-14 14:09:46.027380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-14 14:09:46.027409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.295 [2024-07-14 14:09:46.027426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.295 [2024-07-14 14:09:46.027654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.295 [2024-07-14 14:09:46.027893] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.295 [2024-07-14 14:09:46.027915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.295 [2024-07-14 14:09:46.027929] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.295 [2024-07-14 14:09:46.031129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.295 [2024-07-14 14:09:46.040468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.295 [2024-07-14 14:09:46.040978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-14 14:09:46.041020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.295 [2024-07-14 14:09:46.041038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.295 [2024-07-14 14:09:46.041278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.295 [2024-07-14 14:09:46.041495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.295 [2024-07-14 14:09:46.041517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.295 [2024-07-14 14:09:46.041533] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.295 [2024-07-14 14:09:46.044694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.295 Malloc0 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.295 [2024-07-14 14:09:46.054099] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.295 [2024-07-14 14:09:46.054465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.295 [2024-07-14 14:09:46.054493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151a1e0 with addr=10.0.0.2, port=4420 00:34:08.295 [2024-07-14 14:09:46.054509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151a1e0 is same with the state(5) to be set 00:34:08.295 [2024-07-14 14:09:46.054724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a1e0 (9): Bad file descriptor 00:34:08.295 [2024-07-14 14:09:46.054953] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.295 [2024-07-14 14:09:46.054975] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.295 [2024-07-14 14:09:46.054998] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.295 [2024-07-14 14:09:46.058233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.295 [2024-07-14 14:09:46.065477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.295 [2024-07-14 14:09:46.067795] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.295 14:09:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1609036 00:34:08.295 [2024-07-14 14:09:46.233277] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:18.274 00:34:18.274 Latency(us) 00:34:18.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.274 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:18.274 Verification LBA range: start 0x0 length 0x4000 00:34:18.274 Nvme1n1 : 15.01 6393.45 24.97 9188.16 0.00 8188.39 825.27 20291.89 00:34:18.274 =================================================================================================================== 00:34:18.274 Total : 6393.45 24.97 9188.16 0.00 8188.39 825.27 20291.89 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:18.274 rmmod nvme_tcp 00:34:18.274 rmmod nvme_fabrics 00:34:18.274 rmmod nvme_keyring 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1609703 ']' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1609703 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1609703 ']' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1609703 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1609703 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1609703' 00:34:18.274 killing process with pid 1609703 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1609703 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1609703 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.274 14:09:55 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:18.274 14:09:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.182 14:09:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:20.182 00:34:20.182 real 0m22.239s 00:34:20.182 user 0m58.891s 00:34:20.182 sys 0m4.455s 00:34:20.182 14:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:20.182 14:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.182 ************************************ 00:34:20.182 END TEST nvmf_bdevperf 00:34:20.182 ************************************ 00:34:20.182 14:09:57 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:20.182 14:09:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:20.182 14:09:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:20.182 14:09:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:20.182 ************************************ 00:34:20.182 START TEST nvmf_target_disconnect 00:34:20.182 ************************************ 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:20.182 * Looking for test storage... 
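[Editorial note] The bdevperf summary table earlier in this log is internally consistent: with the 4096-byte I/O size stated in the job description, MiB/s follows directly from IOPS (MiB/s = IOPS × 4096 / 2^20). A quick illustrative check, not part of the test suite:

```python
# Verify the reported throughput against the reported IOPS.
IO_SIZE = 4096                         # bytes per I/O, from "IO size: 4096" above
iops = 6393.45                         # reported IOPS for Nvme1n1
mib_per_s = iops * IO_SIZE / (1024 * 1024)
print(round(mib_per_s, 2))             # 24.97, matching the MiB/s column
```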
00:34:20.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:20.182 14:09:57 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:20.182 14:09:57 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:22.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
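[Editorial note] The `pci_bus_cache` lookups and `[[ 0x159b == ... ]]` pattern matches above are `nvmf/common.sh` classifying each candidate NIC by its PCI vendor/device ID (Intel E810 and X722 families versus a list of Mellanox parts). A stand-alone sketch of the same classification, with the ID lists taken from the array assignments visible in this log:

```python
# Illustrative re-implementation of the NIC classification performed by
# nvmf/common.sh above; ID sets are copied from the pci_bus_cache keys in the log.
INTEL, MELLANOX = 0x8086, 0x15B3
E810 = {0x1592, 0x159B}
X722 = {0x37D2}
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor: int, device: int) -> str:
    """Return the NIC family the test harness would assign to this PCI ID."""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return "unknown"

print(classify(0x8086, 0x159B))  # the two ports found above -> 'e810'
```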
00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:22.083 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:22.083 14:09:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:22.083 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:22.083 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:22.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:22.084 14:09:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:22.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:34:22.084 00:34:22.084 --- 10.0.0.2 ping statistics --- 00:34:22.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.084 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:34:22.084 00:34:22.084 --- 10.0.0.1 ping statistics --- 00:34:22.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.084 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:22.084 14:09:59 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:22.084 ************************************ 00:34:22.084 START TEST nvmf_target_disconnect_tc1 00:34:22.084 ************************************ 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:22.084 14:09:59 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:22.084 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.084 [2024-07-14 14:09:59.946225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.084 [2024-07-14 14:09:59.946294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1244740 with addr=10.0.0.2, port=4420 00:34:22.084 [2024-07-14 14:09:59.946331] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:22.084 [2024-07-14 14:09:59.946357] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:22.084 [2024-07-14 14:09:59.946372] nvme.c: 
898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:22.084 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:22.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:22.084 Initializing NVMe Controllers 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:22.084 00:34:22.084 real 0m0.096s 00:34:22.084 user 0m0.045s 00:34:22.084 sys 0m0.051s 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:22.084 ************************************ 00:34:22.084 END TEST nvmf_target_disconnect_tc1 00:34:22.084 ************************************ 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:22.084 ************************************ 00:34:22.084 START TEST nvmf_target_disconnect_tc2 00:34:22.084 ************************************ 00:34:22.084 14:09:59 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:22.084 14:09:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1612845 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1612845 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1612845 ']' 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:22.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:22.084 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:22.084 [2024-07-14 14:10:00.048198] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:22.084 [2024-07-14 14:10:00.048282] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.343 EAL: No free 2048 kB hugepages reported on node 1 00:34:22.343 [2024-07-14 14:10:00.121608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.343 [2024-07-14 14:10:00.214188] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.343 [2024-07-14 14:10:00.214258] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.343 [2024-07-14 14:10:00.214285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.343 [2024-07-14 14:10:00.214297] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.343 [2024-07-14 14:10:00.214316] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
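[Editorial note] `nvmf_tgt` is launched above with core mask `-m 0xF0` (also visible as `-c 0xF0` in the DPDK EAL parameters), which selects CPU cores 4 through 7 — hence the four "Reactor started on core 5/6/7/4" notices that follow. Decoding such a mask (illustrative helper, not part of SPDK):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices selected by an SPDK/DPDK-style core mask."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores_from_mask(0xF0))  # [4, 5, 6, 7]
```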
00:34:22.343 [2024-07-14 14:10:00.214394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:22.343 [2024-07-14 14:10:00.214460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:22.343 [2024-07-14 14:10:00.214566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:22.343 [2024-07-14 14:10:00.214573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.601 Malloc0
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.601 [2024-07-14 14:10:00.379541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.601 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.602 [2024-07-14 14:10:00.407780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1612874
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:22.602 14:10:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:34:22.602 EAL: No free 2048 kB hugepages reported on node 1
00:34:24.536 14:10:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1612845
00:34:24.536 14:10:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 [2024-07-14 14:10:02.432348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 [2024-07-14 14:10:02.432727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Read completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.536 Write completed with error (sct=0, sc=8)
00:34:24.536 starting I/O failed
00:34:24.537 [2024-07-14 14:10:02.433056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Write completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Write completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Write completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Write completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Write completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 Read completed with error (sct=0, sc=8)
00:34:24.537 starting I/O failed
00:34:24.537 [2024-07-14 14:10:02.433365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:24.537 [2024-07-14 14:10:02.433613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.433656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.433755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.433782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.433922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.433949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.434937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.434963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.435071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.435115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.435231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.435259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.435420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.435464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.435609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.435638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.435740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.435766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.435896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.435939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.436068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.436093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.436234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.436276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.436399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.436425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.436510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.436553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.436683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.436708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.436855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.436886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.437031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.437148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.437274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.437406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.437517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.537 [2024-07-14 14:10:02.437672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.537 qpair failed and we were unable to recover it.
00:34:24.537 [2024-07-14 14:10:02.437789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.437814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.437903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.437929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.438918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.438957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.439067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.439106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.439252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.439277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.439399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.439424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.439542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.439567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.439655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.439699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.439856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.439892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.440889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.440992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.441017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.441101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.441126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.441223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.441248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.441344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.441369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.441487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.441513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.441626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.441651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.442206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.442231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.442346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.442387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.442490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.442517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.442648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.538 [2024-07-14 14:10:02.442690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.538 qpair failed and we were unable to recover it.
00:34:24.538 [2024-07-14 14:10:02.442804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.442829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 00:34:24.538 [2024-07-14 14:10:02.442958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.442983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 00:34:24.538 [2024-07-14 14:10:02.443092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.443117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 00:34:24.538 [2024-07-14 14:10:02.443252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.443292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 00:34:24.538 [2024-07-14 14:10:02.443382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.443408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 
00:34:24.538 [2024-07-14 14:10:02.443525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.443550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 00:34:24.538 [2024-07-14 14:10:02.443643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.538 [2024-07-14 14:10:02.443667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.538 qpair failed and we were unable to recover it. 00:34:24.538 [2024-07-14 14:10:02.443779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.443804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.443911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.443938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.444029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.444054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.444136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.444178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.444360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.444385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.444533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.444575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.444724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.444749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.444841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.444865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.444993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.445150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.445316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.445437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.445578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.445720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.445864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.445896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.446416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.446893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.446920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.447011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.447039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.447158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.447184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.447332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.447358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.447473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.447499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.447681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.447706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.447822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.447847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.448009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.448171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.448312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.448434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.448572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.448710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.448818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.448846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.449009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.449047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.449182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.449209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 
00:34:24.539 [2024-07-14 14:10:02.449327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.449370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.449540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.449565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.539 [2024-07-14 14:10:02.449654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.539 [2024-07-14 14:10:02.449679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.539 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.449798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.449824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.449946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.449974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-07-14 14:10:02.450091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.450244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.450385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.450529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.450676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-07-14 14:10:02.450818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.450940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.450967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.451111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.451136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.451254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.451281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.451397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.451423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-07-14 14:10:02.451509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.451533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.451667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.451696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.451836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.451862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.451985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.452095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-07-14 14:10:02.452275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.452395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.452545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.452736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.452880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.452906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-07-14 14:10:02.452996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.453109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.453219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.453349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.453509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 
00:34:24.540 [2024-07-14 14:10:02.453677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.453842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.540 [2024-07-14 14:10:02.453889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.540 qpair failed and we were unable to recover it. 00:34:24.540 [2024-07-14 14:10:02.454002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.454138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.454260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.454379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.454519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.454638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.454784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.454906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.454933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.455048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.455074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.455192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.455218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.455309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.455335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.455445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.455490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.455630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.455656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.455776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.455804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.455985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.456119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.456257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.456389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.456555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.456696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.456883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.456926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.457041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.457156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.457336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.457477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.457623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.457768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.457886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.457911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.457999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.458135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.458299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.458434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.458573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.458710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.458844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.458873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.459006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.459033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.459128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.459154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.459243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.459269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 
00:34:24.541 [2024-07-14 14:10:02.459355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.459382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.459526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.459552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.541 qpair failed and we were unable to recover it. 00:34:24.541 [2024-07-14 14:10:02.459713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.541 [2024-07-14 14:10:02.459743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.459901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.459928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.460047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.460158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.460301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.460473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.460590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.460739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.460848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.460965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.460990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.461074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.461214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.461367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.461541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.461712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.461838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.461959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.461984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.462072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.462098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.462211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.462235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.462397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.462422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.462532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.462561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.462678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.462703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.462833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.462871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.463006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.463140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.463271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.463426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.463543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.463711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.463832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.463857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.464007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.464121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.464296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.464443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.464619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.464755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.464882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.464922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.465083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.465111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 
00:34:24.542 [2024-07-14 14:10:02.465224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.465251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.465372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.465398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.465514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.542 [2024-07-14 14:10:02.465540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.542 qpair failed and we were unable to recover it. 00:34:24.542 [2024-07-14 14:10:02.465667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.465694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.465820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.465846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 
00:34:24.543 [2024-07-14 14:10:02.465946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.465975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.466135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.466160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.466279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.466305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.466396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.466422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.466518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.466548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 
00:34:24.543 [2024-07-14 14:10:02.466662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.466687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.466810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.466848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.466985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.467012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.467135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.467160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 00:34:24.543 [2024-07-14 14:10:02.467246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.543 [2024-07-14 14:10:02.467271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.543 qpair failed and we were unable to recover it. 
00:34:24.543 [2024-07-14 14:10:02.467382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.543 [2024-07-14 14:10:02.467407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.543 qpair failed and we were unable to recover it.
00:34:24.543 [2024-07-14 14:10:02.467964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.543 [2024-07-14 14:10:02.467993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.543 qpair failed and we were unable to recover it.
00:34:24.543 [2024-07-14 14:10:02.470759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.543 [2024-07-14 14:10:02.470815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.543 qpair failed and we were unable to recover it.
00:34:24.546 [... the same three-line failure sequence repeats for tqpair=0x1ce0840, 0x7fc428000b90, and 0x7fc438000b90, addr=10.0.0.2, port=4420, through 14:10:02.484488 ...]
00:34:24.546 [2024-07-14 14:10:02.484588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.484619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.484726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.484754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.484892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.484918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.485033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.485232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 
00:34:24.546 [2024-07-14 14:10:02.485367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.485511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.485628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.485759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.485896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.485922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 
00:34:24.546 [2024-07-14 14:10:02.486049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.486187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.486314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.486459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.486639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 
00:34:24.546 [2024-07-14 14:10:02.486751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.486901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.486927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.487043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.487197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.487338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 
00:34:24.546 [2024-07-14 14:10:02.487505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.487625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.487764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.487929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.487957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.488043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.488069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 
00:34:24.546 [2024-07-14 14:10:02.488204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.488234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.488370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.488396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.488489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.488520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.546 [2024-07-14 14:10:02.488627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.546 [2024-07-14 14:10:02.488655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.546 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.488791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.488815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.488923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.488954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.489064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.489178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.489326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.489454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.489592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.489732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.489871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.489907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.490020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.490045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.490229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.490255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.490392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.490420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.490564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.490589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.490698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.490724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.490863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.490899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.491007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.491115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.491281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.491443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.491576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.491750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.491917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.491942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.492022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.492155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.492304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.492422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.492589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.492753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.492898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.492925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.493039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.493181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.493297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.493454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.493620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.493753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.493895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.493925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 
00:34:24.547 [2024-07-14 14:10:02.494053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.494077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.494193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.494217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.494322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.494354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.547 [2024-07-14 14:10:02.494485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.547 [2024-07-14 14:10:02.494510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.547 qpair failed and we were unable to recover it. 00:34:24.548 [2024-07-14 14:10:02.494623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.548 [2024-07-14 14:10:02.494647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.548 qpair failed and we were unable to recover it. 
00:34:24.548 [2024-07-14 14:10:02.494799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.548 [2024-07-14 14:10:02.494841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.548 qpair failed and we were unable to recover it. 00:34:24.548 [2024-07-14 14:10:02.494992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.548 [2024-07-14 14:10:02.495019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.548 qpair failed and we were unable to recover it. 00:34:24.548 [2024-07-14 14:10:02.495137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.548 [2024-07-14 14:10:02.495162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.548 qpair failed and we were unable to recover it. 00:34:24.548 [2024-07-14 14:10:02.495356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.548 [2024-07-14 14:10:02.495401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.548 qpair failed and we were unable to recover it. 00:34:24.548 [2024-07-14 14:10:02.495509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.548 [2024-07-14 14:10:02.495536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.548 qpair failed and we were unable to recover it. 
00:34:24.548 [2024-07-14 14:10:02.495622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.495647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.495785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.495814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.495963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.495989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.496929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.496968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.497073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.497101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.497202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.497229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.497355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.497381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.497548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.497577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.497720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.497745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.497860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.497892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.498037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.498062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.498149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.498175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.498317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.498342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.498566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.498626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.498735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.498760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.498856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.498901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.499025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.499052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.499169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.499195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.499312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.499337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.499583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.499634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.499736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.499762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.499889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.499915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.500039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.500064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.500204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.500229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.500331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.500356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.500488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.548 [2024-07-14 14:10:02.500517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.548 qpair failed and we were unable to recover it.
00:34:24.548 [2024-07-14 14:10:02.500622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.500651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.500736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.500764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.500861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.500904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.501898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.501924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.502034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.502060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.832 [2024-07-14 14:10:02.502174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.832 [2024-07-14 14:10:02.502200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.832 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.502310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.502337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.502423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.502449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.502593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.502622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.502733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.502759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.502850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.502880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.502973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.502999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.503091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.503116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.503214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.503239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.503365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.503393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.503525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.503552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.503642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.503668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.503851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.503900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.504946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.504973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.505116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.505141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.505283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.505311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.505469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.505494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.505584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.505610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.505746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.505775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.505890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.505915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.506944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.506970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.507895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.507934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.833 qpair failed and we were unable to recover it.
00:34:24.833 [2024-07-14 14:10:02.508058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.833 [2024-07-14 14:10:02.508085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.508247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.508275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.508400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.508428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.508547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.508573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.508665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.508690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.508849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.508882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.508994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.509941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.509966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.510931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.510973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.511117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.511142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.511254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.511295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.511451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.511478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.511638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.511663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.511766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.511805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.511936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.511979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.512099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.512123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.512248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.512273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.512393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.512418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.512509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.512533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.512646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.834 [2024-07-14 14:10:02.512685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.834 qpair failed and we were unable to recover it.
00:34:24.834 [2024-07-14 14:10:02.512868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.512918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.513063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.513090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.513262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.513291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.513460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.513507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.513650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.513676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 
00:34:24.834 [2024-07-14 14:10:02.513789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.513816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.514026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.514052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.514166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.514191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.514338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.834 [2024-07-14 14:10:02.514381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.834 qpair failed and we were unable to recover it. 00:34:24.834 [2024-07-14 14:10:02.514548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.514598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.514763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.514787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.514908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.514940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.515061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.515170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.515287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.515389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.515529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.515644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.515823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.515868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.516040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.516232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.516354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.516519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.516658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.516788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.516971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.516997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.517093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.517118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.517219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.517244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.517358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.517383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.517496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.517521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.517663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.517688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.517814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.517856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.518004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.518143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.518314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.518431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.518637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.518747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.518919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.518952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.519062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.519225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.519363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.519501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.519637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.519781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.835 [2024-07-14 14:10:02.519928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.519972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.520058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.520084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.520206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.520231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.520362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.520390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 00:34:24.835 [2024-07-14 14:10:02.520531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.835 [2024-07-14 14:10:02.520556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.835 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.520644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.520669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.520840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.520868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.521010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.521035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.521155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.521181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.521371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.521396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.521510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.521535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.521632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.521671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.521851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.521888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.522021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.522128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.522291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.522458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.522595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.522783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.522958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.522985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.523079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.523109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.523256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.523286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.523424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.523449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.523558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.523585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.523723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.523751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.523886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.523911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.524054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.524079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.524181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.524209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.524344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.524368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.524515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.524555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.524719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.524744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.524861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.524891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.525006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.525114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.525302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.525407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.525543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.525738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.525886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.525911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.526002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.526027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 
00:34:24.836 [2024-07-14 14:10:02.526109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.526134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.526250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.836 [2024-07-14 14:10:02.526275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.836 qpair failed and we were unable to recover it. 00:34:24.836 [2024-07-14 14:10:02.526383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.526410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.526572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.526597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.526685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.526711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.526861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.526930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.527031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.527058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.527174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.527205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.527337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.527389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.527518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.527542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.527678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.527702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.527906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.527932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.528050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.528075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.528163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.528187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.528362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.528412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.528545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.528569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.528708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.528733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.528882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.528913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.529044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.529180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.529347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.529526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.529640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.529769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.529961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.529986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.530099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.530123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.530278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.530303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.530418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.530443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.530583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.530623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.530748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.530774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.530909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.530934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.531049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.531075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.531211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.531239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.531351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.531375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.531499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.531524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 
00:34:24.837 [2024-07-14 14:10:02.531654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.531683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.531854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.531884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.532000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.532025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.837 qpair failed and we were unable to recover it. 00:34:24.837 [2024-07-14 14:10:02.532108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.837 [2024-07-14 14:10:02.532133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.532264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.532288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.532403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.532428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.532538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.532564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.532725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.532750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.532861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.532892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.533007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.533115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.533261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.533464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.533608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.533747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.533889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.533933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.534030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.534174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.534304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.534446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.534589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.534730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.534948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.534976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.535069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.535202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.535393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.535511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.535647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.535791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.535933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.535959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.536075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.536183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.536297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.536431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.536605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.536730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.536943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.536973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.537085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.537223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.537379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 
00:34:24.838 [2024-07-14 14:10:02.537523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.537668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.537804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.537954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.838 [2024-07-14 14:10:02.537981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.838 qpair failed and we were unable to recover it. 00:34:24.838 [2024-07-14 14:10:02.538099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.839 [2024-07-14 14:10:02.538124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.839 qpair failed and we were unable to recover it. 
00:34:24.839 [2024-07-14 14:10:02.538268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.839 [2024-07-14 14:10:02.538297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.839 qpair failed and we were unable to recover it. 00:34:24.839 [2024-07-14 14:10:02.538461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.839 [2024-07-14 14:10:02.538486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.839 qpair failed and we were unable to recover it. 00:34:24.839 [2024-07-14 14:10:02.538594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.839 [2024-07-14 14:10:02.538620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.839 qpair failed and we were unable to recover it. 00:34:24.839 [2024-07-14 14:10:02.538786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.839 [2024-07-14 14:10:02.538813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.839 qpair failed and we were unable to recover it. 00:34:24.839 [2024-07-14 14:10:02.538925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.839 [2024-07-14 14:10:02.538951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.839 qpair failed and we were unable to recover it. 
00:34:24.839 [2024-07-14 14:10:02.539039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.539863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.539973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.540118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.540238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.540434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.540614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.540773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.540901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.540943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.541031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.541057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.541171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.541196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.541353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.541408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.541552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.541577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.541723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.541751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.541905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.541963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.542085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.542112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.542207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.542232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.542434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.542459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.542549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.542574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.542726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.542753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.542856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.542896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.543039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.543065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.543179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.543204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.543374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.543425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.543560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.543586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.543703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.543728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.543894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.543921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.544061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.544086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.544197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.544238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.839 [2024-07-14 14:10:02.544335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.839 [2024-07-14 14:10:02.544363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.839 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.544524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.544548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.544707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.544738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.544836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.544866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.545011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.545036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.545176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.545202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.545376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.545429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.545593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.545619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.545708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.545734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.545853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.545890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.546853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.546903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.547857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.547888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.548923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.548952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.549961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.549986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.550102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.550127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.550265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.550297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.550433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.550460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.840 qpair failed and we were unable to recover it.
00:34:24.840 [2024-07-14 14:10:02.550602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.840 [2024-07-14 14:10:02.550644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.550772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.550801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.550933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.550960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.551046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.551071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.551243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.551269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.551387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.551413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.551585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.551627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.551774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.551801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.551944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.551969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.552087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.552112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.552214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.552242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.552388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.552413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.552551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.552592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.552716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.552743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.552888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.552913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.553058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.553202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.553338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.553530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.553724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.553865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.553991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.554909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.554992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.555017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.555159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.555184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.555303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.555345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.555502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.555533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.555674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.555700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.555844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.555897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.556038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.556064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.556156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.841 [2024-07-14 14:10:02.556182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.841 qpair failed and we were unable to recover it.
00:34:24.841 [2024-07-14 14:10:02.556276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.842 [2024-07-14 14:10:02.556301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.842 qpair failed and we were unable to recover it.
00:34:24.842 [2024-07-14 14:10:02.556430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.842 [2024-07-14 14:10:02.556458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.842 qpair failed and we were unable to recover it.
00:34:24.842 [2024-07-14 14:10:02.556591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.842 [2024-07-14 14:10:02.556616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.842 qpair failed and we were unable to recover it.
00:34:24.842 [2024-07-14 14:10:02.556734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.842 [2024-07-14 14:10:02.556759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.842 qpair failed and we were unable to recover it.
00:34:24.842 [2024-07-14 14:10:02.556893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.556921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.557050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.557074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.557188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.557213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.557353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.557381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.557513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.557538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.557688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.557731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.557882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.557913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.558055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.558080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.558197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.558222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.558407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.558433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.558550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.558574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.558687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.558712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.558828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.558853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.558978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.559118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.559252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.559417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.559557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.559744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.559872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.559903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.559989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.560135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.560311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.560433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.560623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.560756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.560896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.560934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.561036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.561208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.561349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.561490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.561632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.561743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.561900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.561939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.562063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.562090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 
00:34:24.842 [2024-07-14 14:10:02.562208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.562233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.842 [2024-07-14 14:10:02.562324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.842 [2024-07-14 14:10:02.562350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.842 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.562498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.562523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.562635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.562676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.562796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.562824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.562958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.562983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.563119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.563144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.563308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.563336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.563450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.563475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.563554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.563579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.563689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.563732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.563845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.563870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.563989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.564128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.564280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.564392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.564545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.564683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.564874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.564937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.565033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.565176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.565286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.565443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.565613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.565753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.565926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.565970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.566093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.566120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.566265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.566292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.566458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.566513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.566652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.566678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.566794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.566820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.566981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.567095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.567240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.567377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.567550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.567671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.567809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.567951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.567977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.568087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.568113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 
00:34:24.843 [2024-07-14 14:10:02.568206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.568233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.568323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.843 [2024-07-14 14:10:02.568354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.843 qpair failed and we were unable to recover it. 00:34:24.843 [2024-07-14 14:10:02.568470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.568498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.568607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.568636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.568771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.568796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.568890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.568916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.569024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.569158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.569273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.569404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.569567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.569693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.569850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.569884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.570003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.570146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.570319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.570489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.570605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.570735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.570901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.570926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.571019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.571154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.571295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.571444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.571609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.571774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.571919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.571945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.572053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.572082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.572258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.572284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.572419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.572449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.572577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.572605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.572740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.572766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.572857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.572890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.573006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.573147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.573260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.573421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.573611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.573731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.573858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.573907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 
00:34:24.844 [2024-07-14 14:10:02.574029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.574055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.574195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.574244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.844 [2024-07-14 14:10:02.574401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.844 [2024-07-14 14:10:02.574451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.844 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.574558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.574582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.574674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.574700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.574790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.574836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.574962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.574989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.575084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.575110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.575202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.575245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.575384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.575410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.575553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.575595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.575748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.575775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.575918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.575943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.576089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.576114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.576224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.576272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.576419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.576444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.576549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.576574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.576710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.576737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.576897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.576922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.577042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.577215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.577352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.577491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.577627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.577786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.577937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.577966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.578111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.578140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.578275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.578302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.578442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.578473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.578586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.578616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.578751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.578777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.578897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.578924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.579084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.579127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.579269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.579296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.579380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.579407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 
00:34:24.845 [2024-07-14 14:10:02.579513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.579542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.579625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.579654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.579779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.845 [2024-07-14 14:10:02.579806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.845 qpair failed and we were unable to recover it. 00:34:24.845 [2024-07-14 14:10:02.579971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.579998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.580143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.580169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.580261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.580288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.580429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.580458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.580576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.580602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.580704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.580730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.580845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.580902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.581016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.581135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.581290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.581454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.581587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.581713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.581886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.581911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.582050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.582205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.582332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.582447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.582616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.582802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.582932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.582957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.583073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.583181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.583324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.583456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.583622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.583737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.583898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.583945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.584066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.584093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.584211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.584238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.584379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.584408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.584577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.584603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.584720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.584761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.584864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.584899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.585006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.585171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.585292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 
00:34:24.846 [2024-07-14 14:10:02.585438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.585576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.585677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.585818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.846 [2024-07-14 14:10:02.585843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.846 qpair failed and we were unable to recover it. 00:34:24.846 [2024-07-14 14:10:02.586002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.586142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.586335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.586464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.586642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.586815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.586966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.586993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.587108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.587134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.587256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.587282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.587428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.587454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.587593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.587637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.587783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.587808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.587919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.587945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.588062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.588087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.588228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.588253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.588336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.588361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.588537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.588584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.588723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.588748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.588890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.588915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.589028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.589142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.589247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.589396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.589561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.589704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.589811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.589969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.589995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.590084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.590110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.590288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.590314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.590432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.590459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.590555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.590599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.590740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.590769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.590902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.590928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.591016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.591041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.591158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.591201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.591360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.591385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.591497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.591538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.591667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.591696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.591812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.591837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.591983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.592009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 00:34:24.847 [2024-07-14 14:10:02.592099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.847 [2024-07-14 14:10:02.592124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.847 qpair failed and we were unable to recover it. 
00:34:24.847 [2024-07-14 14:10:02.592286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.592310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.592447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.592472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.592603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.592631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.592751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.592776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.592917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.592943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.593063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.593179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.593290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.593399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.593511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.593667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.593899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.593957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.594064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.594213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.594349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.594498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.594639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.594802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.594955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.594981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.595090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.595239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.595351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.595508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.595634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.595814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.595927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.595954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.596103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.596134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.596292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.596318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.596410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.596436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.596537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.596565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.596676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.596702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.596871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.596907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.597015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.597160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.597280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.848 [2024-07-14 14:10:02.597482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.597628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.597763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.597884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.597910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 00:34:24.848 [2024-07-14 14:10:02.598023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.848 [2024-07-14 14:10:02.598050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.848 qpair failed and we were unable to recover it. 
00:34:24.849 [2024-07-14 14:10:02.598137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.598164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.598270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.598297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.598433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.598458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.598577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.598602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.598737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.598775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.598909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.598936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.599079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.599104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.599203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.599253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.599370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.599411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.599526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.599554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.599703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.599732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.599874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.599905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.600951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.600977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.601110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.601135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.601280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.601305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.601412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.601436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.601567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.601609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.601790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.601819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.601957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.601984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.602950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.602976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.603118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.603143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.603277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.603304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.603401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.603429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.603565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.603590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.849 qpair failed and we were unable to recover it.
00:34:24.849 [2024-07-14 14:10:02.603733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.849 [2024-07-14 14:10:02.603773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.603865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.603898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.604058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.604083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.604238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.604266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.604416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.604466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.604611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.604636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.604746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.604771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.604897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.604926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.605888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.605916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.606972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.606997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.607112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.607251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.607394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.607584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.607748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.607871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.607997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.608133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.608251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.608412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.608582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.608703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.608856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.608911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.609039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.609066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.609184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.609208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.609411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.850 [2024-07-14 14:10:02.609440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.850 qpair failed and we were unable to recover it.
00:34:24.850 [2024-07-14 14:10:02.609551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.609577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.609720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.609745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.609858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.609892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.610947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.610973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.611097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.611261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.611427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.611545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.611670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.611832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.611977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.612120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.612275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.612422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.612565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.612743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.612946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.612974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.613958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.613997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.614113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.614253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.614401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.614569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.614686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.614852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.614974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.615005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.615093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.615118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.615228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.615256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.615388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.851 [2024-07-14 14:10:02.615413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.851 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.615501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.615525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.615638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.615663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.615814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.615839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.615962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.615987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.616959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.616985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.617067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.617092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.617211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.617235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.617355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.617383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.617492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.617517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.617674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.617728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.617870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.617906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.618891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.618934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.619941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.619966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.620885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.620928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.621045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.621069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.621181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.621206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.852 [2024-07-14 14:10:02.621324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.852 [2024-07-14 14:10:02.621349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.852 qpair failed and we were unable to recover it.
00:34:24.851 [2024-07-14 14:10:02.621490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.621518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.621675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.621702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.621797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.621825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.621994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.622106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.622242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.622424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.622641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.622793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.622951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.622995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.623102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.623141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.623301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.623350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.623535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.623585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.623674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.623700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.623803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.623841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.623959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.623986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.624075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.624100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.624212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.624239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.624388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.624435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.624586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.624631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.624755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.624780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.624900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.624925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.625055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.625220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.625352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.625551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.625704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.625852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.625984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.626009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.626100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.626128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.626240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.626291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.626441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.626488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.626623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.626670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.626802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.626828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.626987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.627136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.627353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.627506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.627630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.627761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.853 qpair failed and we were unable to recover it.
00:34:24.853 [2024-07-14 14:10:02.627897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.853 [2024-07-14 14:10:02.627925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.628086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.628129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.628257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.628300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.628451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.628499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.628683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.628730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.628911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.628938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.629031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.629057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.629197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.629240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.629352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.629396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.629587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.629635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.629743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.629770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.629894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.629923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.630870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.630901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.631894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.631921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.632866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.632897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.633013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.633043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.633148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.633191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.854 qpair failed and we were unable to recover it.
00:34:24.854 [2024-07-14 14:10:02.633366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.854 [2024-07-14 14:10:02.633411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.633524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.633567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.633708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.633734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.633867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.633901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.634873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.634979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.635095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.635260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.635386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.635523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.635644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.635813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.635840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.636930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.636955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.637068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.637092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.637186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.637228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.637337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.637378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.637495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.637550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.637682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.637713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.637873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.637906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.638939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.638965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.639085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.639110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.639198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.639222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.639337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.855 [2024-07-14 14:10:02.639362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.855 qpair failed and we were unable to recover it.
00:34:24.855 [2024-07-14 14:10:02.639471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.639497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.639660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.639687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.639816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.639843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.639959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.639985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.640143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.640182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.640326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.640356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.640537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.640581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.640742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.640810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.640908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.640934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.641100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.641143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.641278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.641306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.641428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.641471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.641635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.641679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.641791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.856 [2024-07-14 14:10:02.641816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.856 qpair failed and we were unable to recover it.
00:34:24.856 [2024-07-14 14:10:02.641957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.642002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.642115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.642147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.642280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.642310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.642465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.642494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.642650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.642702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 
00:34:24.856 [2024-07-14 14:10:02.642844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.642869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.642991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.643169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.643352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.643501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 
00:34:24.856 [2024-07-14 14:10:02.643626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.643778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.643949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.643977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.644096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.644128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.644243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.644285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 
00:34:24.856 [2024-07-14 14:10:02.644414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.644442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.644561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.644602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.644706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.644735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.644862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.644898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.645013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 
00:34:24.856 [2024-07-14 14:10:02.645185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.645341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.645464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.645612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.645734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 
00:34:24.856 [2024-07-14 14:10:02.645889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.645929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.856 [2024-07-14 14:10:02.646060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.856 [2024-07-14 14:10:02.646087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.856 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.646198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.646242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.646359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.646401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.646490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.646518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.646663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.646689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.646842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.646887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.647021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.647059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.647181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.647207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.647367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.647417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.647567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.647616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.647713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.647744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.647910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.647937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.648057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.648082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.648211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.648255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.648407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.648462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.648571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.648614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.648715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.648741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.648909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.648935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.649052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.649077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.649182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.649223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.649392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.649441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.649542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.649569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.649717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.649745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.649845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.649872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.649994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.650157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.650286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.650418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.650609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.650757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.650892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.650920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.651007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.651051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.651178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.651207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.651328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.651357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.651451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.651494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.651624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.651670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.651787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.651813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.651970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.652014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.652151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.652180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 
00:34:24.857 [2024-07-14 14:10:02.652335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.652373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.652505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.652530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.857 qpair failed and we were unable to recover it. 00:34:24.857 [2024-07-14 14:10:02.652629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.857 [2024-07-14 14:10:02.652655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.652772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.652797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.652914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.652960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 
00:34:24.858 [2024-07-14 14:10:02.653081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.653110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.653264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.653293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.653418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.653447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.653607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.653652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.653741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.653767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 
00:34:24.858 [2024-07-14 14:10:02.653886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.653913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.654017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.654178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.654337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.654470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 
00:34:24.858 [2024-07-14 14:10:02.654610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.654772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.654935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.654963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.655081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.655107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 00:34:24.858 [2024-07-14 14:10:02.655222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.858 [2024-07-14 14:10:02.655248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.858 qpair failed and we were unable to recover it. 
00:34:24.858 [2024-07-14 14:10:02.655365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.655391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.655484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.655511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.655670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.655698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.655811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.655849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.655986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.656148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.656286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.656445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.656610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.656783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.656930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.656958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.657049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.657093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.657227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.657272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.657459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.657510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.657677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.657731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.657895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.657940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.858 qpair failed and we were unable to recover it.
00:34:24.858 [2024-07-14 14:10:02.658085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.858 [2024-07-14 14:10:02.658127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.658232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.658260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.658362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.658390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.658579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.658626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.658754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.658779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.658893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.658920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.659063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.659094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.659268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.659313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.659451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.659479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.659647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.659693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.659892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.659918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.660005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.660031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.660164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.660209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.660351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.660398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.660573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.660620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.660709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.660734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.660857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.660892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.661015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.661040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.661185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.661213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.661373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.661422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.661578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.661627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.661726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.661754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.661933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.661959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.662103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.662128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.662265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.662292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.662402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.662427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.662596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.662623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.662769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.662813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.662953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.662992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.663125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.663163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.663337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.663392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.663520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.663569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.663678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.663719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.663836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.663885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.664034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.664060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.664172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.664200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.664297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.664325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.664444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.664493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.664618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.859 [2024-07-14 14:10:02.664646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.859 qpair failed and we were unable to recover it.
00:34:24.859 [2024-07-14 14:10:02.664798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.664826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.664965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.664991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.665902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.665928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.666020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.666045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.666133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.666175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.666326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.666354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.666506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.666534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.666657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.666685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.666864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.666910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.667046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.667085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.667227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.667256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.667409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.667457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.667564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.667590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.667766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.667794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.667938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.667965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.668048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.668077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.668171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.668212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.668361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.668407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.668559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.668607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.668751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.668794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.668905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.668950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.669096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.669122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.669286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.669315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.669461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.669514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.669626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.669674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.669840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.669890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.670042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.670069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.670162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.670188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.860 [2024-07-14 14:10:02.670272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.860 [2024-07-14 14:10:02.670296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.860 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.670431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.670458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.670611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.670660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.670787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.670816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.670974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.670999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.671109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.671133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.671239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.671264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.671405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.671452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.671580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.671607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.671750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.671792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.671910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.671966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.672941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.672967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.673107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.673132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.673240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.673268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.673381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.673421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.673520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.673548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.673684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.673726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.673859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.673904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.674028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.674054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.674146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.861 [2024-07-14 14:10:02.674173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.861 qpair failed and we were unable to recover it.
00:34:24.861 [2024-07-14 14:10:02.674377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.674419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.674571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.674620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.674784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.674814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.674953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.674979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.675064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.675089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 
00:34:24.861 [2024-07-14 14:10:02.675182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.675207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.675347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.675394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.675500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.675524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.675654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.675682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.675833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.675883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 
00:34:24.861 [2024-07-14 14:10:02.676035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.676073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.676217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.676250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.676406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.676436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.676564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.676594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.861 [2024-07-14 14:10:02.676751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.676780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 
00:34:24.861 [2024-07-14 14:10:02.676909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.861 [2024-07-14 14:10:02.676953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.861 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.677044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.677069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.677184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.677225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.677369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.677416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.677610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.677662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.677794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.677819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.677932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.677958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.678036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.678171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.678311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.678447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.678593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.678764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.678918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.678979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.679103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.679128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.679235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.679264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.679412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.679440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.679563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.679590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.679711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.679739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.679888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.679946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.680037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.680064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.680150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.680176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.680316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.680346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.680511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.680562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.680713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.680742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.680887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.680932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.681145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.681184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.681297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.681342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.681431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.681456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.681595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.681643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.681724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.681750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.681888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.681932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.682069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.682112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.682286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.682314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.682467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.682515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.682645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.682693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.682846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.682873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.683015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.683040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.683130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.683171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.683308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.683336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 
00:34:24.862 [2024-07-14 14:10:02.683491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.683544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.862 qpair failed and we were unable to recover it. 00:34:24.862 [2024-07-14 14:10:02.683681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.862 [2024-07-14 14:10:02.683724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.683821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.683864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.683960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.683985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.684074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.684098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 
00:34:24.863 [2024-07-14 14:10:02.684237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.684265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.684404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.684431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.684555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.684582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.684712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.684739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.684833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.684861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 
00:34:24.863 [2024-07-14 14:10:02.684977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.685091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.685265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.685398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.685530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 
00:34:24.863 [2024-07-14 14:10:02.685680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.685829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.685857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 
00:34:24.863 [2024-07-14 14:10:02.686371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.686889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.686992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.687019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 
00:34:24.863 [2024-07-14 14:10:02.687106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.687132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.687294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.687338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.687478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.687528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.687614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.687639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 00:34:24.863 [2024-07-14 14:10:02.687757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.863 [2024-07-14 14:10:02.687783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.863 qpair failed and we were unable to recover it. 
00:34:24.863 [2024-07-14 14:10:02.687906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.863 [2024-07-14 14:10:02.687932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.863 qpair failed and we were unable to recover it.
00:34:24.863 [2024-07-14 14:10:02.688031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.863 [2024-07-14 14:10:02.688056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.863 qpair failed and we were unable to recover it.
00:34:24.863 [2024-07-14 14:10:02.688157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.863 [2024-07-14 14:10:02.688182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.863 qpair failed and we were unable to recover it.
00:34:24.863 [2024-07-14 14:10:02.688283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.863 [2024-07-14 14:10:02.688322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.863 qpair failed and we were unable to recover it.
00:34:24.863 [2024-07-14 14:10:02.688427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.863 [2024-07-14 14:10:02.688457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.863 qpair failed and we were unable to recover it.
00:34:24.863 [2024-07-14 14:10:02.688582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.863 [2024-07-14 14:10:02.688610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.863 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.688704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.688730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.688816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.688840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.688959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.688989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.689096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.689125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.689237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.689261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.689386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.689426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.689592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.689621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.689713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.689740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.689866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.689904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.690908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.690964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.691062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.691089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.691205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.691251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.691399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cee390 is same with the state(5) to be set
00:34:24.864 [2024-07-14 14:10:02.691530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.691561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.691689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.691717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.691821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.691851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.691974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.692134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.692297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.692419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.692547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.692728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.692918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.692946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.693897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.693941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.694034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.694060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.694176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.694202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.694344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.694373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.694496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.694525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.864 [2024-07-14 14:10:02.694628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.864 [2024-07-14 14:10:02.694657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.864 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.694761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.694789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.694932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.694959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.695896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.695921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.696914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.696957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.697973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.697998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.698140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.698248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.698371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.698525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.698672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.698829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.698981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.699125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.699264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.699444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.699592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.699726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.699908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.699947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.700042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.700069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.700161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.700186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.700288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.700317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.700420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.700446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.865 [2024-07-14 14:10:02.700529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.865 [2024-07-14 14:10:02.700554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.865 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.700671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.700697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.700791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.700816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.700938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.700966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.701054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.701080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.701181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.701207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.701324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.701350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.701458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.701488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.701638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.866 [2024-07-14 14:10:02.701683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.866 qpair failed and we were unable to recover it.
00:34:24.866 [2024-07-14 14:10:02.701777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.701803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.701897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.701924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 
00:34:24.866 [2024-07-14 14:10:02.702430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.702950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.702989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 
00:34:24.866 [2024-07-14 14:10:02.703135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.703165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.703317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.703362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.703495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.703523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.703687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.703712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.703827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.703854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 
00:34:24.866 [2024-07-14 14:10:02.704001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.704167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.704288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.704444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.704569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 
00:34:24.866 [2024-07-14 14:10:02.704722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.704898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.704934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.705046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.705090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.705261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.705305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.705409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.705438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 
00:34:24.866 [2024-07-14 14:10:02.705592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.705636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.705723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.705749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.705859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.705894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.706000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.706029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.706160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.706188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 
00:34:24.866 [2024-07-14 14:10:02.706340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.706389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.706511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.706539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.866 qpair failed and we were unable to recover it. 00:34:24.866 [2024-07-14 14:10:02.706635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.866 [2024-07-14 14:10:02.706664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.706818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.706845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.706979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.707108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.707294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.707469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.707660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.707803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.707957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.707983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.708069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.708208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.708325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.708439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.708561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.708702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.708853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.708885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.709010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.709117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.709232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.709368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.709493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.709642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.709806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.709930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.709957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.710054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.710209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.710329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.710451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.710564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.710765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.710942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.710971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.711144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.711191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.711299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.711343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 
00:34:24.867 [2024-07-14 14:10:02.711485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.711528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.711633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.711661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.867 qpair failed and we were unable to recover it. 00:34:24.867 [2024-07-14 14:10:02.711802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.867 [2024-07-14 14:10:02.711829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.711922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.711947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.712062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 
00:34:24.868 [2024-07-14 14:10:02.712200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.712323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.712442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.712579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.712764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 
00:34:24.868 [2024-07-14 14:10:02.712888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.712917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.713011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.713141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.713319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.713442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 
00:34:24.868 [2024-07-14 14:10:02.713611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.713769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.713886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.713916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.714007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.714126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 
00:34:24.868 [2024-07-14 14:10:02.714273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.714413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.714536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.714701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.714828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.714856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 
00:34:24.868 [2024-07-14 14:10:02.714992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.715031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.715188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.715231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.715338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.715368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.715480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.715506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 00:34:24.868 [2024-07-14 14:10:02.715719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.868 [2024-07-14 14:10:02.715768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.868 qpair failed and we were unable to recover it. 
00:34:24.868 [2024-07-14 14:10:02.715899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.715944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.716953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.716979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.717095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.717138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.717313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.717365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.717486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.717541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.717635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.717677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.868 qpair failed and we were unable to recover it.
00:34:24.868 [2024-07-14 14:10:02.717767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.868 [2024-07-14 14:10:02.717795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.717908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.717946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.718938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.718964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.719918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.719947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.720952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.720978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.721920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.721945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.722957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.722984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.723124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.723165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.723264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.869 [2024-07-14 14:10:02.723292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.869 qpair failed and we were unable to recover it.
00:34:24.869 [2024-07-14 14:10:02.723404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.723429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.723569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.723596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.723695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.723723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.723957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.723996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.724090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.724117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.724218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.724244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.724382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.724419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.724550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.724593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.724685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.724714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.724847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.724883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.725913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.725952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.726104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.726236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.726426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.726588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.726725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.726873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.726979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.727965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.727990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.728856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.728925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.729049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.729075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.729194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.729222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.729337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.870 [2024-07-14 14:10:02.729364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.870 qpair failed and we were unable to recover it.
00:34:24.870 [2024-07-14 14:10:02.729510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.871 [2024-07-14 14:10:02.729538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.871 qpair failed and we were unable to recover it.
00:34:24.871 [2024-07-14 14:10:02.729653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.729697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.729799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.729828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.730003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.730123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.730235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.871 [2024-07-14 14:10:02.730424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.730556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.730727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.730914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.730940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.731058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.871 [2024-07-14 14:10:02.731191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.731311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.731438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.731618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.731743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.871 [2024-07-14 14:10:02.731885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.731911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.732036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.732254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.732392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.732522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.871 [2024-07-14 14:10:02.732647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.732796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.732913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.732940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.733042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.733172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.871 [2024-07-14 14:10:02.733311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.733470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.733610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.733781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.733929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.733956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.871 [2024-07-14 14:10:02.734044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.734069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.734163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.734204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.734329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.734356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.734451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.734479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 00:34:24.871 [2024-07-14 14:10:02.734580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.871 [2024-07-14 14:10:02.734612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.871 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.734737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.734765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.734909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.734954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.735081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.735109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.735214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.735243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.735353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.735382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.735484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.735514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.735646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.735675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.735800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.735829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.735978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.736098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.736228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.736405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.736537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.736687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.736809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.736941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.736967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.737525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.737886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.737913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.738028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.738188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.738346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.738515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.738634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.738777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.738886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.738911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.738990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.739094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.739217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.739323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 
00:34:24.872 [2024-07-14 14:10:02.739456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.739602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.739735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.872 [2024-07-14 14:10:02.739859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.872 [2024-07-14 14:10:02.739891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.872 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.740007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.740138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.740300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.740429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.740554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.740726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.740865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.740901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.741023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.741048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.741156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.741181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.741355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.741398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.741570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.741598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.741688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.741716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.741817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.741845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.742004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.742042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.742200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.742251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.742362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.742391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.742570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.742613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.742757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.742783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.742917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.742947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.743105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.743153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.743268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.743311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.743423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.743466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.743594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.743620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.743752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.743790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.743910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.743941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.744071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.744195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.744332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.744505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.744638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.744769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.744928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.744953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.745073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.745210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.745353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.745494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.873 [2024-07-14 14:10:02.745648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.745805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.745937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.745966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.746060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.746086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 00:34:24.873 [2024-07-14 14:10:02.746201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.873 [2024-07-14 14:10:02.746230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.873 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.746383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.746411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.746511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.746539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.746708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.746765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.746901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.746928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.747044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.747070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.747199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.747228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.747336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.747362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.747532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.747561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.747690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.747718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.747867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.747912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.748005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.748033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.748211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.748240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.748390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.748434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.748632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.748681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.748802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.748827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.748940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.748967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.749108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.749259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.749413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.749548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.749664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.749783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.749908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.749934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.750021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.750135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.750243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.750378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.750561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.750678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.750817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.750844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.750974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.751122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.751289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.751448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.751602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.751753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.751874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.751924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.752064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.752092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.752218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.752246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 00:34:24.874 [2024-07-14 14:10:02.752369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.874 [2024-07-14 14:10:02.752397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.874 qpair failed and we were unable to recover it. 
00:34:24.874 [2024-07-14 14:10:02.752496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.752528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.752659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.752689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.752843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.752872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.753015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.753159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.753287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.753436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.753576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.753730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.753897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.753934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.754017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.754177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.754311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.754472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.754639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.754777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.754933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.754972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.755067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.755094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.755203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.755242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.755417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.755464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.755571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.755615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.755736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.755762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.755863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.755898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.755993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.756103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.756252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.756447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.756586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.756713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.756864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.756900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.757032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.757058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.757141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.757167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.757247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.757273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.757406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.757435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 00:34:24.875 [2024-07-14 14:10:02.757551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.875 [2024-07-14 14:10:02.757595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.875 qpair failed and we were unable to recover it. 
00:34:24.875 [2024-07-14 14:10:02.757695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.875 [2024-07-14 14:10:02.757724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.875 qpair failed and we were unable to recover it.
00:34:24.875 [2024-07-14 14:10:02.757864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.875 [2024-07-14 14:10:02.757931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.875 qpair failed and we were unable to recover it.
00:34:24.875 [2024-07-14 14:10:02.758072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.875 [2024-07-14 14:10:02.758102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.875 qpair failed and we were unable to recover it.
00:34:24.875 [2024-07-14 14:10:02.758232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.875 [2024-07-14 14:10:02.758261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.875 qpair failed and we were unable to recover it.
00:34:24.875 [2024-07-14 14:10:02.758359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.875 [2024-07-14 14:10:02.758387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.875 qpair failed and we were unable to recover it.
00:34:24.875 [2024-07-14 14:10:02.758490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.875 [2024-07-14 14:10:02.758518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.875 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.758645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.758673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.758797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.758827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.758941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.758968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.759113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.759140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.759279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.759309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.759424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.759467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.759621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.759650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.759742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.759770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.759913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.759940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.760943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.760969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.761059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.761086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.761185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.761224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.761337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.761369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.761542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.761585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.761696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.761725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.761854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.761891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.762010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.762035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.762157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.762203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.762331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.762360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.762487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.762517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.876 qpair failed and we were unable to recover it.
00:34:24.876 [2024-07-14 14:10:02.762629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.876 [2024-07-14 14:10:02.762663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.762767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.762795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.762915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.762942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.763953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.763981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.764098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.764127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.764283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.764313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.764461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.764490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.764643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.764669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.764764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.764790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.764874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.764908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.765841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.765978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.766863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.766980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.767848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.767998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.768934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.768973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.769068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.769094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.769190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.769216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.769309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.769335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.769429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.769458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.769607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.877 [2024-07-14 14:10:02.769659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.877 qpair failed and we were unable to recover it.
00:34:24.877 [2024-07-14 14:10:02.769826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.769869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.770942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.770969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.771064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.771089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.771204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.771229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.771341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.771369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.771501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.771543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.771653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.878 [2024-07-14 14:10:02.771697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:24.878 qpair failed and we were unable to recover it.
00:34:24.878 [2024-07-14 14:10:02.771808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.771840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.771972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.772112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.772256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.772413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 
00:34:24.878 [2024-07-14 14:10:02.772598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.772771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.772933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.772962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.773056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.773168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 
00:34:24.878 [2024-07-14 14:10:02.773283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.773396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.773535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.773672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.773824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.773855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 
00:34:24.878 [2024-07-14 14:10:02.773983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.774115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.774244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.774377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.774505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 
00:34:24.878 [2024-07-14 14:10:02.774658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.774843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.774870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.775017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.775042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.775164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.775206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.775342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.775367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 
00:34:24.878 [2024-07-14 14:10:02.775536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.775564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.775714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.775742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.775838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.878 [2024-07-14 14:10:02.775865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.878 qpair failed and we were unable to recover it. 00:34:24.878 [2024-07-14 14:10:02.776006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.776139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.776331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.776538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.776673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.776788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.776943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.776969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.777062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.777090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.777223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.777252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.777455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.777483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.777578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.777607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.777698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.777727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.777837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.777864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.778000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.778188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.778402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.778524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.778680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.778841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.778967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.778992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.779082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.779206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.779319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.779477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.779623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.779768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.779964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.779993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.780078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.780217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.780340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.780507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.780659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.780783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.780953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.780979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.781061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.781086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.781187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.781215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.781370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.781398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 
00:34:24.879 [2024-07-14 14:10:02.781504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.781532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.781659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.781687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.781852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.781900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.781995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.879 [2024-07-14 14:10:02.782023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.879 qpair failed and we were unable to recover it. 00:34:24.879 [2024-07-14 14:10:02.782114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.782140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.782288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.782316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.782421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.782465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.782573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.782604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.782705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.782735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.782856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.782904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.783026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.783176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.783321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.783495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.783640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.783761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.783901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.783926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.784046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.784162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.784276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.784417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.784636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.784760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.784937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.784964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.785054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.785202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.785354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.785489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.785656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.785830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.785974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.785999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 
00:34:24.880 [2024-07-14 14:10:02.786638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.880 [2024-07-14 14:10:02.786913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.880 qpair failed and we were unable to recover it. 00:34:24.880 [2024-07-14 14:10:02.786994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.787119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 
00:34:24.881 [2024-07-14 14:10:02.787306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.787457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.787616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.787775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.787902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.787929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 
00:34:24.881 [2024-07-14 14:10:02.788017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.788133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.788266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.788422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.788543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 
00:34:24.881 [2024-07-14 14:10:02.788652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.788762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.788904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.788930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.789033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.789061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.789158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.789185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 
00:34:24.881 [2024-07-14 14:10:02.789273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.789301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:24.881 [2024-07-14 14:10:02.789405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.881 [2024-07-14 14:10:02.789433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:24.881 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.789573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.789605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.789712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.789741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.789866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.789904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.790055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.790090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.790226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.790255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.790343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.790372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.790484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.790527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.790683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.790727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.790859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.790893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.790978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.791109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.791281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.791428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.791544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.791684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.791811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.791966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.791993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.792095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.792217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.792360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.792470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.792595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.792780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.792925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.792952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.793061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.793089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.793220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.793248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.793342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.793370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.793499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.793530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.793684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.793728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.793817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.793842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.793960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.794008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.794097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.794123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.794233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.794259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 
00:34:25.168 [2024-07-14 14:10:02.794350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.168 [2024-07-14 14:10:02.794376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.168 qpair failed and we were unable to recover it. 00:34:25.168 [2024-07-14 14:10:02.794463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.794488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.794574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.794601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.794683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.794714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.794796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.794822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.794928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.794953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.795070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.795180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.795323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.795452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.795597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.795720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.795890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.795916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.796010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.796120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.796290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.796479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.796598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.796758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.796952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.796979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.797132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.797175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.797336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.797365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.797485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.797513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.797610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.797638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.797732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.797766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.797861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.797898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.798005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.798147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.798273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.798423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.798550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.798673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.798885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.798924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.799024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.799152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.799263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.799415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.799565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.799693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.799840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.799869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 
00:34:25.169 [2024-07-14 14:10:02.799984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.800010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.169 qpair failed and we were unable to recover it. 00:34:25.169 [2024-07-14 14:10:02.800103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.169 [2024-07-14 14:10:02.800130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.800218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.800244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.800346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.800374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.800516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.800561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.800653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.800679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.800791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.800816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.800911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.800937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.801028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.801053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.801144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.801168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.801286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.801311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.801454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.801502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.801635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.801678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.801797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.801823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.801960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.802147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.802301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.802454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.802580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.802699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.802811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.802931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.802956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.803043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.803212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.803356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.803497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.803650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.803783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.803916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.803961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.804056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.804083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.804234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.804276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.804413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.804442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.804600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.804628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.804756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.804784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.804899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.804942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.805060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.805176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.805284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.805420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.805569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 
00:34:25.170 [2024-07-14 14:10:02.805767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.170 qpair failed and we were unable to recover it. 00:34:25.170 [2024-07-14 14:10:02.805940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.170 [2024-07-14 14:10:02.805967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.806063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.806089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.806227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.806256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.806395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.806424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.806527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.806555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.806647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.806689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.806829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.806857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.806978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.807116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.807289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.807508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.807647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.807801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.807954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.807980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.808095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.808122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.808230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.808259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.808378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.808421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.808553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.808581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.808707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.808735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.808844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.808869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.808980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.809097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.809247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.809366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.809510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.809660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.809777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.809920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.809945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.810059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.810170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.810320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.810451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.810573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.810700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.810864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.810896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.811020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.811045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.811133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.811178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.811286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.811327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 00:34:25.171 [2024-07-14 14:10:02.811462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.171 [2024-07-14 14:10:02.811490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.171 qpair failed and we were unable to recover it. 
00:34:25.171 [2024-07-14 14:10:02.811610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.171 [2024-07-14 14:10:02.811638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.171 qpair failed and we were unable to recover it.
00:34:25.171 [2024-07-14 14:10:02.811765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.171 [2024-07-14 14:10:02.811804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.171 qpair failed and we were unable to recover it.
00:34:25.171 [2024-07-14 14:10:02.811912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.811943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.812051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.812090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.812232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.812275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.812407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.812436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.812533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.812564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.812667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.812696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.812873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.812920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.813070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.813188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.813331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.813510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.813699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.813851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.813981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.814105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.814262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.814392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.814542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.814739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.814893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.814932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.815083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.815110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.815266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.815294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.815481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.815509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.815654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.815711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.815825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.815850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.815979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.816090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.816249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.816401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.816526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.816679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.816893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.816932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.172 qpair failed and we were unable to recover it.
00:34:25.172 [2024-07-14 14:10:02.817052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.172 [2024-07-14 14:10:02.817083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.817233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.817277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.817390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.817436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.817551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.817595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.817684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.817711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.817832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.817858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.817956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.817982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.818140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.818282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.818421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.818581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.818732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.818861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.818993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.819132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.819313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.819484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.819664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.819802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.819944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.819971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.820939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.820966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.821892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.821992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.173 [2024-07-14 14:10:02.822804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.173 qpair failed and we were unable to recover it.
00:34:25.173 [2024-07-14 14:10:02.822909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.822936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.823964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.823990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.824966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.824991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.825951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.825977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.826098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.826126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.826280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.826306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.826446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.826475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.826599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.826628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.826728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.826757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.826892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.826922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.827888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.827924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.828008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.828033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.828121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.828163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.828271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.828296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.828401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.828429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.828573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.174 [2024-07-14 14:10:02.828598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.174 qpair failed and we were unable to recover it.
00:34:25.174 [2024-07-14 14:10:02.828711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.174 [2024-07-14 14:10:02.828739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.828863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.828900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.829401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.829860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.829992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.830150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.830331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.830511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.830670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.830817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.830955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.830985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.831086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.831194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.831347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.831519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.831639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.831765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.831872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.831904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.832063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.832192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.832318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.832474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.832635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.832763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.832903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.832970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.833075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.833102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.833219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.833250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.833369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.833398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.833515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.833543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.833634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.833662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.833802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.833832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.833981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.834008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.834132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.834158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.834277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.834307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.834434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.834462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 
00:34:25.175 [2024-07-14 14:10:02.834558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.175 [2024-07-14 14:10:02.834586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.175 qpair failed and we were unable to recover it. 00:34:25.175 [2024-07-14 14:10:02.834701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.834727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.834869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.834902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.834989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.835133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.835329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.835450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.835593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.835740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.835861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.835894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.836013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.836133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.836257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.836378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.836519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.836643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.836777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.836939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.836979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.837081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.837112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.837230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.837260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.837387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.837415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.837580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.837609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.837726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.837767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.837886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.837938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.838034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.838158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.838288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.838437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.838604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.838739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.838937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.838976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.839076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.839108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.839253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.839297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.839411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.839455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 00:34:25.176 [2024-07-14 14:10:02.839590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.176 [2024-07-14 14:10:02.839628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.176 qpair failed and we were unable to recover it. 
00:34:25.176 [2024-07-14 14:10:02.839754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.176 [2024-07-14 14:10:02.839781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.839904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.839939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.840827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.840856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.841894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.841922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.842917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.842956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.843942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.843980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.844960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.844989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.845080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.845106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.845213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.845242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.845365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.177 [2024-07-14 14:10:02.845394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.177 qpair failed and we were unable to recover it.
00:34:25.177 [2024-07-14 14:10:02.845492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.845521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.845614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.845644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.845767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.845796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.845898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.845947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.846900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.846940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.847911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.847945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.848948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.848993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.849129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.849173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.849320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.849345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.849442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.849467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.849557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.849583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.849688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.849727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.849856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.849898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.850898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.850927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.851057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.851086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.851180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.178 [2024-07-14 14:10:02.851210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.178 qpair failed and we were unable to recover it.
00:34:25.178 [2024-07-14 14:10:02.851310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.851340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.851466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.851496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.851592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.851622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.851729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.851758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.851896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.851925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.852948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.852974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.853105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.853239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.853397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.853576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.853702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.853861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.853979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.854178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.854359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.854545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.854702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.854822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.854962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.854992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.855139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.855298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.855450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.855578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.855712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.855861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.855977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.179 [2024-07-14 14:10:02.856004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.179 qpair failed and we were unable to recover it.
00:34:25.179 [2024-07-14 14:10:02.856139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.856168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.179 [2024-07-14 14:10:02.856318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.856363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.179 [2024-07-14 14:10:02.856486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.856536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.179 [2024-07-14 14:10:02.856677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.856703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.179 [2024-07-14 14:10:02.856790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.856817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 
00:34:25.179 [2024-07-14 14:10:02.856932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.856959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.179 [2024-07-14 14:10:02.857044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.857086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.179 [2024-07-14 14:10:02.857209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.179 [2024-07-14 14:10:02.857237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.179 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.857342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.857369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.857520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.857548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.857645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.857673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.857788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.857813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.857923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.857949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.858040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.858174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.858299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.858425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.858586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.858722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.858869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.858901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.859016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.859193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.859375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.859543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.859663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.859802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.859917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.859943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.860061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.860170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.860279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.860480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.860641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.860780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.860895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.860922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.861054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.861098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.861211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.861255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.861394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.861439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.861581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.861607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.861701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.861728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.861845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.861870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.862026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.862197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.862353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.862486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.862687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 
00:34:25.180 [2024-07-14 14:10:02.862816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.862963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.862989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.863101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.863146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.180 [2024-07-14 14:10:02.863277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.180 [2024-07-14 14:10:02.863321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.180 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.863445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.863489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.863612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.863638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.863770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.863796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.863895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.863921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.864015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.864160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.864272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.864415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.864562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.864679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.864792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.864916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.864943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.865038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.865179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.865322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.865464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.865580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.865725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.865842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.865867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.866011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.866163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.866327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.866482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.866588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.866699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.866813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.866922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.866948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.867062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.867087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.867220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.867247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.867339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.867367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.867500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.867528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 
00:34:25.181 [2024-07-14 14:10:02.867684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.867715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.867820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.867846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.867974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.868000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.868174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.181 [2024-07-14 14:10:02.868218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.181 qpair failed and we were unable to recover it. 00:34:25.181 [2024-07-14 14:10:02.868354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.182 [2024-07-14 14:10:02.868398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.182 qpair failed and we were unable to recover it. 
00:34:25.184 [... the same three-record sequence (posix.c:1037:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for tqpair=0x1ce0840 and tqpair=0x7fc430000b90 from 14:10:02.868528 through 14:10:02.884607 ...]
00:34:25.184 [2024-07-14 14:10:02.884770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.884798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.884948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.884977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.885129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.885170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.885270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.885313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.885446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.885475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 
00:34:25.184 [2024-07-14 14:10:02.885615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.885643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.885754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.885782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.885917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.885943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.886027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.886052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.886144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.886169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 
00:34:25.184 [2024-07-14 14:10:02.886286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.886311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.886424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.886448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.886624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.886652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.886786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.886828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.886975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.887001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 
00:34:25.184 [2024-07-14 14:10:02.887087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.184 [2024-07-14 14:10:02.887113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.184 qpair failed and we were unable to recover it. 00:34:25.184 [2024-07-14 14:10:02.887232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.887257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.887371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.887398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.887545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.887573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.887738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.887763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 
00:34:25.185 [2024-07-14 14:10:02.887920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.887948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.888104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.888131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.888249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.888274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.888368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.888393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.888544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.888572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 
00:34:25.185 [2024-07-14 14:10:02.888733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.888758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.888881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.888907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.889023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.889048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.889134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.889163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.889295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.889320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 
00:34:25.185 [2024-07-14 14:10:02.889450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.889478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.185 [2024-07-14 14:10:02.889608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.185 [2024-07-14 14:10:02.889633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.185 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.889746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.889771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.889913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.889942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.890080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.890106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 
00:34:25.186 [2024-07-14 14:10:02.890225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.890250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.890369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.890396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.890532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.890558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.890678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.890703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.890858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.890890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 
00:34:25.186 [2024-07-14 14:10:02.891043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.891183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.891324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.891492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.891661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 
00:34:25.186 [2024-07-14 14:10:02.891791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.891960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.891986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.892107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.892133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.892234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.892262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.892382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.892407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 
00:34:25.186 [2024-07-14 14:10:02.892521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.892546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.892698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.892726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.892837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.892861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.893023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.893048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.893210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.893250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 
00:34:25.186 [2024-07-14 14:10:02.893338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.893362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.893506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.893531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.893676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.893703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.893812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.186 [2024-07-14 14:10:02.893837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.186 qpair failed and we were unable to recover it. 00:34:25.186 [2024-07-14 14:10:02.894041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.894067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 
00:34:25.187 [2024-07-14 14:10:02.894165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.894209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.894343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.894368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.894485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.894511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.894631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.894658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.894857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.894890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 
00:34:25.187 [2024-07-14 14:10:02.894981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.895022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.895157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.895184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.895323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.895348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.895463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.895488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.895704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.895732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 
00:34:25.187 [2024-07-14 14:10:02.895902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.895928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.896068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.896096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.896240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.896265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.896408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.896433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 00:34:25.187 [2024-07-14 14:10:02.896521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.187 [2024-07-14 14:10:02.896546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.187 qpair failed and we were unable to recover it. 
00:34:25.187 [2024-07-14 14:10:02.896763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.896791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.896894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.896920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.897052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.897166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.897312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.897498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.897653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.897867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.897984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.898850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.898957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.899122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.899291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.899452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.899589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.899723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.899888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.899934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.900076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.900101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.900194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.900219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.900374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.900402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.187 qpair failed and we were unable to recover it.
00:34:25.187 [2024-07-14 14:10:02.900514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.187 [2024-07-14 14:10:02.900539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.900657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.900682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.900905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.900937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.901078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.901104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.901225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.901250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.901448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.901475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.901639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.901664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.901805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.901848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.901983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.902116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.902261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.902404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.902569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.902733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.902899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.902928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.903048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.903073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.903284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.903311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.903438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.903466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.903576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.903601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.903744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.903769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.903950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.903976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.904118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.904261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.904403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.904550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.904689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.904799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.904995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.905020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.905196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.905221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.905311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.905336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.905474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.905499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.905620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.905661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.905786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.905814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.906899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.906930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.907049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.907075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.188 [2024-07-14 14:10:02.907293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.188 [2024-07-14 14:10:02.907322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.188 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.907491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.907516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.907675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.907703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.907797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.907825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.907966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.907992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.908203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.908231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.908336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.908364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.908503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.908528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.908668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.908693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.908854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.908885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.909016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.909041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.909129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.909153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.909348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.909389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.909547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.909572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.909664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.909689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.909825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.909852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.910001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.910026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.910168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.910193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.910279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.910322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.910493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.910518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.910682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.910709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.910807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.910835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.911006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.911032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.911175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.911232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.911369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.911400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.911518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.911544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.911687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.911712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.911822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.911849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.912909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.912935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.913089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.913212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.913353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.913491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.913630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.913776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.189 qpair failed and we were unable to recover it.
00:34:25.189 [2024-07-14 14:10:02.913971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.189 [2024-07-14 14:10:02.914021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.914157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.914184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.914315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.914341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.914461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.914486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.914621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.914649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.914793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.914818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.914937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.914963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.915092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.190 [2024-07-14 14:10:02.915116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.190 qpair failed and we were unable to recover it.
00:34:25.190 [2024-07-14 14:10:02.915257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.915282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.915377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.915403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.915544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.915573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.915710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.915735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.915855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.915889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 
00:34:25.190 [2024-07-14 14:10:02.916013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.916151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.916286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.916455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.916615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 
00:34:25.190 [2024-07-14 14:10:02.916783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.916935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.916961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.917109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.917134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.917256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.917299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.917422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.917449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 
00:34:25.190 [2024-07-14 14:10:02.917623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.917648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.917805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.917833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.917980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.918006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.918120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.918144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.918287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.918329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 
00:34:25.190 [2024-07-14 14:10:02.918496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.918521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.918717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.918742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.918874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.918923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.919017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.919043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.919140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.919165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 
00:34:25.190 [2024-07-14 14:10:02.919276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.919301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.919383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.919408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.190 qpair failed and we were unable to recover it. 00:34:25.190 [2024-07-14 14:10:02.919557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.190 [2024-07-14 14:10:02.919582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.919698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.919727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.919860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.919895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.920033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.920176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.920339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.920501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.920637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.920785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.920953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.920980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.921111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.921136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.921230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.921255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.921348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.921373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.921484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.921509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.921644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.921672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.921884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.921910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.922069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.922095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.922212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.922237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.922353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.922378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.922497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.922522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.922656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.922683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.922886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.922929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.923045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.923180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.923426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.923545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.923720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.923827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.923955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.923985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.924082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.924249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.924383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.924523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.924670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.924781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.924960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.924986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.925132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.925157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.925276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.925318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 
00:34:25.191 [2024-07-14 14:10:02.925477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.925502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.925618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.925643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.925842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.925870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.191 qpair failed and we were unable to recover it. 00:34:25.191 [2024-07-14 14:10:02.925984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.191 [2024-07-14 14:10:02.926008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.926129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.926154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.926274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.926299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.926410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.926438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.926606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.926631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.926741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.926781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.926883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.926935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.927051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.927076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.927274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.927302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.927443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.927471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.927632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.927657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.927799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.927841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.927987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.928107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.928272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.928499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.928638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.928800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.928919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.928945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.929063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.929227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.929361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.929487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.929651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.929791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.929958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.929985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.930109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.930134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.930301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.930326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.930474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.930499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.930615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.930639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.930763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.930788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.930868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.930901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.930988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.931131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.931273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.931410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.931578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.192 [2024-07-14 14:10:02.931735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.931861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.931897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.932008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.932034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.932154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.932179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 00:34:25.192 [2024-07-14 14:10:02.932290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.192 [2024-07-14 14:10:02.932332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.192 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.932466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.932494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.932638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.932663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.932754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.932779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.932894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.932924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.933064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.933181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.933314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.933456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.933599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.933738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.933910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.933946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.934039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.934193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.934345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.934486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.934628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.934766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.934945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.934971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.935080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.935108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.935269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.935294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.935406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.935431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.935543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.935571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.935685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.935711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.935849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.935874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.936022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.936159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.936298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.936472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.936611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.936755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.936927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.936956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.937068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.937184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.937296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.937461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.937602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.937740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.937966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.937992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.938087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.938129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.193 [2024-07-14 14:10:02.938231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.938259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 
00:34:25.193 [2024-07-14 14:10:02.938373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.193 [2024-07-14 14:10:02.938403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.193 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.938553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.938578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.938673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.938701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.938828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.938853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.938953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.938978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 
00:34:25.194 [2024-07-14 14:10:02.939193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.939221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.939333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.939358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.939451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.939476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.939570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.939595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.939709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.939734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 
00:34:25.194 [2024-07-14 14:10:02.939875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.939923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 
00:34:25.194 [2024-07-14 14:10:02.940545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.940840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.940977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.941091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 
00:34:25.194 [2024-07-14 14:10:02.941223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.941380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.941494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.941636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.941801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 
00:34:25.194 [2024-07-14 14:10:02.941954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.941979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.942095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.942121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.942250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.942281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.942403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.942428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 00:34:25.194 [2024-07-14 14:10:02.942559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.194 [2024-07-14 14:10:02.942587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.194 qpair failed and we were unable to recover it. 
00:34:25.194 [2024-07-14 14:10:02.942703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.194 [2024-07-14 14:10:02.942729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.194 qpair failed and we were unable to recover it.
00:34:25.194 [2024-07-14 14:10:02.942818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.194 [2024-07-14 14:10:02.942843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.194 qpair failed and we were unable to recover it.
00:34:25.194 [2024-07-14 14:10:02.942947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.194 [2024-07-14 14:10:02.942973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.194 qpair failed and we were unable to recover it.
00:34:25.194 [2024-07-14 14:10:02.943089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.194 [2024-07-14 14:10:02.943115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.194 qpair failed and we were unable to recover it.
00:34:25.194 [2024-07-14 14:10:02.943207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.194 [2024-07-14 14:10:02.943232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.194 qpair failed and we were unable to recover it.
00:34:25.194 [2024-07-14 14:10:02.943404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.194 [2024-07-14 14:10:02.943429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.194 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.943551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.943576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.943692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.943717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.943832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.943857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.943993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.944130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.944267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.944445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.944590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.944725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.944843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.944870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.945922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.945948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.946089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.946282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.946436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.946571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.946740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.946888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.946992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.947020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.947139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.947167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.947300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.947328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.947462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.947504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.947657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.947685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.947815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.947843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.947990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.948015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.948123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.948163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.948371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.948399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.948569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.948597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.948726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.948753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.948918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.948944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.195 [2024-07-14 14:10:02.949038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.195 [2024-07-14 14:10:02.949063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.195 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.949939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.949965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.950895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.950921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.951829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.951869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.952937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.952964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.953104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.953245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.953387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.953534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.953756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.953866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.953985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.954017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.954139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.954168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.954274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.954299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.954422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.954449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.954549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.954577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.196 qpair failed and we were unable to recover it.
00:34:25.196 [2024-07-14 14:10:02.954675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.196 [2024-07-14 14:10:02.954704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.954840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.954870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.955905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.955931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.956017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.956042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.956131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.956156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.956239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.197 [2024-07-14 14:10:02.956283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.197 qpair failed and we were unable to recover it.
00:34:25.197 [2024-07-14 14:10:02.956392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.956420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.956546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.956574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.956726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.956754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.956850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.956885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.957015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 
00:34:25.197 [2024-07-14 14:10:02.957137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.957266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.957399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.957558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.957685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 
00:34:25.197 [2024-07-14 14:10:02.957822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.957959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.957985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.958092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.958254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.958433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 
00:34:25.197 [2024-07-14 14:10:02.958594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.958710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.958824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.958963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.958989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.959077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.959102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 
00:34:25.197 [2024-07-14 14:10:02.959216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.959241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.959358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.959383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.959525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.959553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.959680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.959708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.959848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.959873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 
00:34:25.197 [2024-07-14 14:10:02.959996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.960020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.960115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.960140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.960277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.960305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.960411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.197 [2024-07-14 14:10:02.960436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.197 qpair failed and we were unable to recover it. 00:34:25.197 [2024-07-14 14:10:02.960574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.960602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.960736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.960764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.960897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.960942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.961045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.961072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.961182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.961212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.961369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.961414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.961548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.961592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.961707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.961734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.961829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.961860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.962084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.962210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.962335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.962500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.962659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.962799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.962967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.962992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.963080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.963105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.963262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.963290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.963405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.963433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.963636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.963663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.963792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.963817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.963897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.963922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.964025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.964145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.964293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.964449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.964633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.964795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.964925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.964954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.965059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.965173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.965289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.965409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.965556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.965666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.965889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.965919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.966033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.966058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.966146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.966171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.966276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.966307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.966460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.966506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 00:34:25.198 [2024-07-14 14:10:02.966600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.198 [2024-07-14 14:10:02.966625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.198 qpair failed and we were unable to recover it. 
00:34:25.198 [2024-07-14 14:10:02.966713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.966739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.966824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.966850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.966953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.966982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.967091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.967208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 
00:34:25.199 [2024-07-14 14:10:02.967324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.967470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.967621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.967734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.967846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 
00:34:25.199 [2024-07-14 14:10:02.967963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.967988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.968103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.968128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.968211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.968235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.968355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.968379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 00:34:25.199 [2024-07-14 14:10:02.968495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.199 [2024-07-14 14:10:02.968520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.199 qpair failed and we were unable to recover it. 
00:34:25.199 [2024-07-14 14:10:02.968634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.199 [2024-07-14 14:10:02.968663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.199 qpair failed and we were unable to recover it.
00:34:25.199 [2024-07-14 14:10:02.969648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.199 [2024-07-14 14:10:02.969684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.199 qpair failed and we were unable to recover it.
00:34:25.202 [2024-07-14 14:10:02.985567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.985595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.985718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.985746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.985891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.985917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.986010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.986133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 
00:34:25.202 [2024-07-14 14:10:02.986274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.986397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.986557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.986699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.986866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.986900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 
00:34:25.202 [2024-07-14 14:10:02.987011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.987151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.987316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.987481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.987632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 
00:34:25.202 [2024-07-14 14:10:02.987808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.987954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.987980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.988075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.988100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.988189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.988215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.988355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.988383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 
00:34:25.202 [2024-07-14 14:10:02.988544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.988571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.988677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.988705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.988853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.988887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.989028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.989053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.989130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.989155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 
00:34:25.202 [2024-07-14 14:10:02.989261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.989289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.989453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.202 [2024-07-14 14:10:02.989481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.202 qpair failed and we were unable to recover it. 00:34:25.202 [2024-07-14 14:10:02.989607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.989634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.989736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.989764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.989899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.989939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.990070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.990201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.990380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.990540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.990683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.990795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.990940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.990966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.991058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.991100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.991263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.991288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.991401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.991427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.991568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.991593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.991683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.991708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.991830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.991855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.991976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.992088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.992251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.992383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.992604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.992734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.992888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.992931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.993053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.993172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.993344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.993483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.993614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.993744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.993913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.993952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.994086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.994114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.994223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.994253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.994365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.994390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 
00:34:25.203 [2024-07-14 14:10:02.994536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.994561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.994648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.203 [2024-07-14 14:10:02.994675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.203 qpair failed and we were unable to recover it. 00:34:25.203 [2024-07-14 14:10:02.994791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.994817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.994942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.994967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.995056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 
00:34:25.204 [2024-07-14 14:10:02.995181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.995361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.995500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.995624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.995780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 
00:34:25.204 [2024-07-14 14:10:02.995950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.995976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.996090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.996115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.996283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.996311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.996480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.996522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 00:34:25.204 [2024-07-14 14:10:02.996707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.204 [2024-07-14 14:10:02.996734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.204 qpair failed and we were unable to recover it. 
00:34:25.204 [2024-07-14 14:10:02.996835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.204 [2024-07-14 14:10:02.996863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.204 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats roughly 110 more times between 14:10:02.996 and 14:10:03.014, alternating among tqpair=0x1ce0840, 0x7fc430000b90, and 0x7fc438000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:25.207 [2024-07-14 14:10:03.014808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.014834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.014942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.014969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.015083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.015108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.015218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.015248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.015380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.015425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-14 14:10:03.015548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.015576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.015708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.015736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.015870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.015903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.015997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.016021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.016132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.016160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-14 14:10:03.016348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.016376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.016498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.016530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.016655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.016683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.016852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.016899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.017040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-14 14:10:03.017180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.017328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.017471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.017610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.017741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-14 14:10:03.017906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.017931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.018044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.018070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.018188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.018215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.018369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.018397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.018502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.018530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 
00:34:25.207 [2024-07-14 14:10:03.018628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.018656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.018805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.018833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.018978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.019003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.019114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.207 [2024-07-14 14:10:03.019139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.207 qpair failed and we were unable to recover it. 00:34:25.207 [2024-07-14 14:10:03.019303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.019328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.019441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.019466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.019611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.019639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.019786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.019814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.019957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.019983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.020097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.020122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.020264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.020292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.020421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.020450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.020568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.020596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.020735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.020762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.020897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.020939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.021029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.021054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.021181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.021209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.021333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.021362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.021461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.021489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.021681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.021738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.021860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.021893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.022015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.022041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.022178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.022221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.022362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.022404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.022523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.022550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.022647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.022674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.022788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.022814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.022989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.023135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.023320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.023481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.023630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.023775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.023920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.023946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.024039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.024066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.024230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.024274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.024362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.024388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.024556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.024600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.024739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.024765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.024886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.024932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.025066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.025101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.025201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.025242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.025373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.025400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.025606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.025636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 00:34:25.208 [2024-07-14 14:10:03.025763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.208 [2024-07-14 14:10:03.025791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.208 qpair failed and we were unable to recover it. 
00:34:25.208 [2024-07-14 14:10:03.025901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.025926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.026040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.026165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.026314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.026472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.026613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.026811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.026955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.026980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.027095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.027119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.027336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.027364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.027492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.027519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.027614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.027642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.027771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.027795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.027960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.027988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.028110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.028138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.028249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.028288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.028403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.028433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.028614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.028658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.028770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.028796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.028891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.028917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.029046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.029089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.029220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.029248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.029456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.029519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.029657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.029687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.029810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.029840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.029984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.030097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.030273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.030402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.030560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.030713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.030851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.030885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.030978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.031089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.031252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.031374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.031555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.031762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.031930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.031955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 00:34:25.209 [2024-07-14 14:10:03.032045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.032070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.209 qpair failed and we were unable to recover it. 
00:34:25.209 [2024-07-14 14:10:03.032211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.209 [2024-07-14 14:10:03.032238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.032393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.032421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.032542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.032569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.032688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.032726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.032848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.032881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.032983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.033111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.033254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.033400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.033554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.033692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.033851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.033898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.033990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.034136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.034283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.034425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.034540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.034697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.034843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.034904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.035049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.035207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.035344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.035509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.035666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.035781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.035947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.035973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.036094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.036208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.036372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.036500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.036657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.036789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.036941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.036980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.037100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.037127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.037293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.037337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.037505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.037548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.037647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.037673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.037791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.037817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.037919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.037949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.038042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.038070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 
00:34:25.210 [2024-07-14 14:10:03.038192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.038220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.038341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.038368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.038505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.210 [2024-07-14 14:10:03.038546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.210 qpair failed and we were unable to recover it. 00:34:25.210 [2024-07-14 14:10:03.038699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.038726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.038837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.038863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.038958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.038984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.039100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.039224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.039374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.039507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.039685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.039796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.039911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.039936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.040023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.040048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.040161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.040186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.040326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.040354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.040469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.040511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.040629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.040657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.040781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.040809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.040963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.041002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.041144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.041182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.041318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.041348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.041478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.041505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.041697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.041726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.041857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.041891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.042003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.042135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.042257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.042410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.042565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.042685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.042855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.042895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.043028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.043142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.043276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.043398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.043572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.043756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.043937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.043976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.044090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.044121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.044277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.044321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.044450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.044495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.044674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.044730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.044829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.044858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.045009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.045035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 
00:34:25.211 [2024-07-14 14:10:03.045171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.045199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.045317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.045367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.211 qpair failed and we were unable to recover it. 00:34:25.211 [2024-07-14 14:10:03.045544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.211 [2024-07-14 14:10:03.045594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.045727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.045754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.045900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.045926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 
00:34:25.212 [2024-07-14 14:10:03.046041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.046085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.046208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.046252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.046359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.046396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.046565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.046591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.046701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.046728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 
00:34:25.212 [2024-07-14 14:10:03.046868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.046916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.047038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.047066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.047191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.047219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.047373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.047420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.047585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.047632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 
00:34:25.212 [2024-07-14 14:10:03.047745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.047771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.047910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.047968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.048119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.048161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.048326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.048375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.048577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.048605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 
00:34:25.212 [2024-07-14 14:10:03.048735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.048764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.048939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.048965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.049078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.049103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.049222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.049247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.049418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.049446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 
00:34:25.212 [2024-07-14 14:10:03.049557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.049598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.049711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.049736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.212 qpair failed and we were unable to recover it. 00:34:25.212 [2024-07-14 14:10:03.049865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.212 [2024-07-14 14:10:03.049914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.050040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.050066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.050183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.050225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.050410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.050458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.050565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.050596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.050762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.050806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.050933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.050959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.051080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.051105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.051243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.051286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.051441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.051494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.051647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.051698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.051857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.051895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.052002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.052028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.052141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.052167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.052245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.052270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.052419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.052472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.052681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.052734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.052863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.052901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.053046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.053071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.053212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.053237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.053350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.053391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.053532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.053578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.053721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.053746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.053838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.053863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.053986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.054102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.054211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.054340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.054465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.054619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.054733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.054900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.054931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.055050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.055077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.055161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.055204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.055399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.055426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.055547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.055575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.055700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.055728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.055869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.055920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 
00:34:25.213 [2024-07-14 14:10:03.056070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.056097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.056248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.213 [2024-07-14 14:10:03.056291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.213 qpair failed and we were unable to recover it. 00:34:25.213 [2024-07-14 14:10:03.056429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.056460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.056627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.056690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.056831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.056858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.056987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.057132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.057316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.057434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.057569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.057771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.057900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.057927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.058048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.058074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.058193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.058219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.058339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.058382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.058540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.058592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.058705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.058747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.058903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.058929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.059011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.059146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.059266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.059431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.059582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.059735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.059891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.059934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.060078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.060103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.060186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.060211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.060344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.060372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.060559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.060586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.060713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.060741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.060870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.060919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.061036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.061152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.061281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.061440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.061567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.061723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.061883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.061926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.062018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.062043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.062158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.062183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-14 14:10:03.062296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.062324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.062423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.062451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.062539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.062567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.062765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.062823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-14 14:10:03.062964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.214 [2024-07-14 14:10:03.063002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.063133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.063172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.063353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.063384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.063499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.063549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.063680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.063710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.063867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.063921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.064016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.064041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.064185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.064226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.064333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.064385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.064532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.064583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.064678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.064706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.064807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.064838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.065017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.065179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.065339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.065484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.065612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.065797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.065964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.065991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.066108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.066137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.066246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.066274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.066378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.066403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.066544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.066595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.066769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.066812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.066952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.066991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.067142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.067169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.067289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.067333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.067456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.067495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.067644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.067674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.067804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.067833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.067981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.068135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.068250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.068411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.068629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.068788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.068960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.068987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.069067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.069093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.069210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.069237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.069408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.069464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.069569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.069596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-14 14:10:03.069716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.069745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.069894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.069952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.070085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.070112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-14 14:10:03.070297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.215 [2024-07-14 14:10:03.070351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.070453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.070481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 
00:34:25.216 [2024-07-14 14:10:03.070684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.070736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.070852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.070894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.071001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.071142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.071282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 
00:34:25.216 [2024-07-14 14:10:03.071462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.071614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.071779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.071950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.071976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 00:34:25.216 [2024-07-14 14:10:03.072064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.216 [2024-07-14 14:10:03.072089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.216 qpair failed and we were unable to recover it. 
00:34:25.216 [2024-07-14 14:10:03.072187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.216 [2024-07-14 14:10:03.072213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.216 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 14:10:03.072 through 14:10:03.093 for tqpair handles 0x1ce0840, 0x7fc428000b90, 0x7fc430000b90, and 0x7fc438000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:25.219 [2024-07-14 14:10:03.093801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.093854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.094008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.094125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.094283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.094436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.094579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.094760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.094935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.094966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.095085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.095110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.095253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.095281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.095474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.095502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.095663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.095691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.095813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.095840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.095993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.096111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.096229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.096410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.096540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.096722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.096887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.096916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.097018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.097138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.097284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.097475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.097629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.097811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.097937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.097965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.098072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.098111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.098289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.098336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.098517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.098557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.098695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.098723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.098874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.098909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.099018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.099044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.099156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.099181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.099311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.099344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.099442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.099470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.099584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.099609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.099724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.099752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.100964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.100996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.101117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.101159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 
00:34:25.219 [2024-07-14 14:10:03.101266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.101295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.101494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.101522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.219 qpair failed and we were unable to recover it. 00:34:25.219 [2024-07-14 14:10:03.101621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.219 [2024-07-14 14:10:03.101649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.101780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.101808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.101924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.101950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.102058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.102084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.102174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.102217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.102343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.102372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.102473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.102501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.102651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.102679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.102856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.102908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.103006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.103033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.103149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.103175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.103299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.103352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.103486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.103527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.103732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.103785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.103903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.103930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.104455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.104935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.104961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.105078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.105252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.105382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.105518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.105666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.105806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.105933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.105959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.106045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.106071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.106157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.106184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 00:34:25.220 [2024-07-14 14:10:03.106311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.220 [2024-07-14 14:10:03.106337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.220 qpair failed and we were unable to recover it. 
00:34:25.220 [2024-07-14 14:10:03.106434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.220 [2024-07-14 14:10:03.106460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.220 qpair failed and we were unable to recover it.
00:34:25.220 [2024-07-14 14:10:03.106568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.220 [2024-07-14 14:10:03.106596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.220 qpair failed and we were unable to recover it.
00:34:25.220 [2024-07-14 14:10:03.106722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.220 [2024-07-14 14:10:03.106749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.220 qpair failed and we were unable to recover it.
00:34:25.220 [2024-07-14 14:10:03.106906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.220 [2024-07-14 14:10:03.106949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.220 qpair failed and we were unable to recover it.
00:34:25.220 [2024-07-14 14:10:03.107079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.220 [2024-07-14 14:10:03.107107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.220 qpair failed and we were unable to recover it.
00:34:25.220 [2024-07-14 14:10:03.107196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.107224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.107350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.107377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.107495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.107523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.107674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.107701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.107805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.107848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.107985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.108129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.108283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.108453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.108647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.108789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.108920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.108947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.109966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.109994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.110080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.110105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.110217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.110270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.110401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.110426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.110572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.110598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.110768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.110795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.110959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.110984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.111122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.111279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.111405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.111590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.111744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.111911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.111995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.112021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.112099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.112124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.112323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.112351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.112504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.112532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.112646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.112689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.112838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.112866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.113054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.113191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.113355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.113473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.113645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.113874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.113993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.221 [2024-07-14 14:10:03.114018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.221 qpair failed and we were unable to recover it.
00:34:25.221 [2024-07-14 14:10:03.114115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.114156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.114261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.114286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.114379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.114405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.114571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.114599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.114711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.114759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.114856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.114892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.115934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.115959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.116956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.116982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.117073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.117099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.117229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.117257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.117365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.117408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.117505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.117534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.117667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.117707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.117863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.117909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.118912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.118939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.119957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.119983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.120096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.120122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.120220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.120245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.222 [2024-07-14 14:10:03.120335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.222 [2024-07-14 14:10:03.120362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.222 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.120443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.120473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.120551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.120576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.120693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.120719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.120810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.120836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.120932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.120968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.121848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.121883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.223 qpair failed and we were unable to recover it.
00:34:25.223 [2024-07-14 14:10:03.122002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.223 [2024-07-14 14:10:03.122028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.122114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.122140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.122233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.122259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.122406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.122432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.122548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.122574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.122672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.122698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.122852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.122901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.123032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.123060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.123177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.123204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.123321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.497 [2024-07-14 14:10:03.123347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.497 qpair failed and we were unable to recover it.
00:34:25.497 [2024-07-14 14:10:03.123473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.123499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.123672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.123720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.123823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.123855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.123989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.124167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 
00:34:25.497 [2024-07-14 14:10:03.124319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.124500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.124621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.124740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.124883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.124938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 
00:34:25.497 [2024-07-14 14:10:03.125039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.125153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.125292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.125485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.125611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 
00:34:25.497 [2024-07-14 14:10:03.125740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.125859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.125890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.125987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.126014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.126110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.126140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.126240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.126267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 
00:34:25.497 [2024-07-14 14:10:03.126411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.497 [2024-07-14 14:10:03.126440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.497 qpair failed and we were unable to recover it. 00:34:25.497 [2024-07-14 14:10:03.126542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.126571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.126670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.126699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.126835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.126860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.126979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.127091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.127207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.127346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.127476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.127622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.127768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.127941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.127968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.128473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.128905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.128991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.129099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.129219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.129325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.129426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.129537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.129639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.129762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.129896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.129935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.130064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.130093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.130216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.130257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.130360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.130390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.130531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.130575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.130700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.130729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.130819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.130861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.131015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.131162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.131324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.131440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.131550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.131677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.131796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.131913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.131939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.132063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.132223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.132340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.132450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.132562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.132746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.132866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.132900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.133043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.133173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.133360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.133508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.133702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.133828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.133973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.133999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.134120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.134164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.134288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.134317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.134423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.134452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.134591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.134617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.134737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.134766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.134890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.134916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.135003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.135029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.135116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.135143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.135230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.135256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 
00:34:25.498 [2024-07-14 14:10:03.135359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.135385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.135490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.135519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.498 qpair failed and we were unable to recover it. 00:34:25.498 [2024-07-14 14:10:03.135650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.498 [2024-07-14 14:10:03.135679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.135774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.135803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.135948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.135974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.136057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.136166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.136354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.136491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.136618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.136773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.136909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.136935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.137032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.137147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.137268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.137382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.137577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.137741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.137889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.137931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.138018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.138175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.138376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.138556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.138710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.138851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.138973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.138998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.139089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.139229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.139362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.139506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.139709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.139847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.139970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.139996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.140081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.140204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.140319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.140477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.140626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.140787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.140910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.140956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.141074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.141188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.141299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.141442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.141556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.141670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.141806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.141919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.141945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.142255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.142862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.142894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.142982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.143497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.143903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.143985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.144010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 
00:34:25.499 [2024-07-14 14:10:03.144157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.144185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.144371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.144396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.144507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.144536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.144696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.144724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.499 qpair failed and we were unable to recover it. 00:34:25.499 [2024-07-14 14:10:03.144858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.499 [2024-07-14 14:10:03.144888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.144979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.145096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.145234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.145371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.145517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.145666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.145816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.145951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.145979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.146068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.146182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.146329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.146440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.146618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.146746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.146867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.146903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.147033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.147150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.147283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.147408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.147527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.147675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.147823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.147851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.147976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.148109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.148241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.148421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.148605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.148765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.148904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.148931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.149050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.149180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.149334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.149476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.149618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.149740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.149849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.149888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.150572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.150962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.150988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.151072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.151200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.151321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.151456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.151607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.151717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.151857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.151890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.152018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.152233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.152444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.152571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 
00:34:25.500 [2024-07-14 14:10:03.152716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.152827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.152949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.152975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.500 [2024-07-14 14:10:03.153062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.500 [2024-07-14 14:10:03.153089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.500 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.153207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.153233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.153348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.153393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.153509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.153537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.153648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.153673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.153784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.153811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.153899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.153926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.154031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.154070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.154195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.154222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.154369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.154413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.154606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.154656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.154765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.154793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.154924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.154953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.155074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.155102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.155212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.155240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.155366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.155394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.155512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.155566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.155672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.155699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.155820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.155846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.155981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.156153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.156294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.156447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.156577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.156702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.156866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.156899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.156998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.157023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.157135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.157160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.157271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.157299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.157491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.157520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.157629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.157658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.157759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.157787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.157974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.158088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.158288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.158528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.158645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.158773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.158943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.158970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.159068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.159206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.159340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.159497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.159650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.159811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.159936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.159962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.160052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.160077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.160178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.160204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.160321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.160347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.160448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.160477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.160585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.160611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 00:34:25.501 [2024-07-14 14:10:03.160738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.501 [2024-07-14 14:10:03.160768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.501 qpair failed and we were unable to recover it. 
00:34:25.501 [2024-07-14 14:10:03.160901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.160954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.161970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.161997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.162171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.162321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.162472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.162629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.162781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.162915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.162994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.163019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.163133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.163176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.163276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.501 [2024-07-14 14:10:03.163303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.501 qpair failed and we were unable to recover it.
00:34:25.501 [2024-07-14 14:10:03.163427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.163455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.163548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.163576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.163742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.163799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.163894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.163922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.164952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.164980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.165917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.165944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.166902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.166930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.167950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.167976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.168943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.168969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.169968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.169994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.170950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.170978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.171135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.171316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.171451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.171611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.171752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.171908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.171995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.502 [2024-07-14 14:10:03.172020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.502 qpair failed and we were unable to recover it.
00:34:25.502 [2024-07-14 14:10:03.172110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.172220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.172335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.172445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.172600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.172764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.172884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.172910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.173942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.173969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.174896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.174984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.175897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.175987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.176865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.503 [2024-07-14 14:10:03.176909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.503 qpair failed and we were unable to recover it.
00:34:25.503 [2024-07-14 14:10:03.177033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.177177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.177307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.177441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.177567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 
00:34:25.503 [2024-07-14 14:10:03.177692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.177811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.177955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.177982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.178112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.178139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.178225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.178249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 
00:34:25.503 [2024-07-14 14:10:03.178358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.178386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.178517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.178545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.178699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.178727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.178851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.178890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.179034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.179064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 
00:34:25.503 [2024-07-14 14:10:03.179173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.179215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.179345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.179373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.179492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.179534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.179656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.179684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.179814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.179842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 
00:34:25.503 [2024-07-14 14:10:03.180029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.180068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.180213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.180259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.180364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.180408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.180585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.180633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.180722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.180749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 
00:34:25.503 [2024-07-14 14:10:03.180872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.180904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.181009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.181037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.181191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.181216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.181340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.181367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 00:34:25.503 [2024-07-14 14:10:03.181459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.503 [2024-07-14 14:10:03.181486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.503 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.181625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.181650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.181740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.181765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.181883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.181926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.182048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.182185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.182310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.182466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.182581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.182705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.182822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.182850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.182975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.183109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.183238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.183388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.183509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.183635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.183764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.183906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.183934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.184020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.184181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.184369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.184520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.184680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.184796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.184940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.184965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.185059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.185198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.185353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.185483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.185645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.185762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.185899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.185925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.186008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.186152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.186325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.186492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.186614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.186764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.186925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.186954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.187048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.187073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.187180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.187208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.187302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.187330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.187431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.187458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.187573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.187617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 00:34:25.504 [2024-07-14 14:10:03.187719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.504 [2024-07-14 14:10:03.187750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.504 qpair failed and we were unable to recover it. 
00:34:25.504 [2024-07-14 14:10:03.187920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.187947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.188042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.188068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.188215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.188270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.188434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.188479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.188593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.188637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.188753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.188779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.188896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.188922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.189031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.189060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.189239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.189282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.189422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.189451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.189585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.189615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.189721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.189750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.189850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.189885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.190014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.190044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.190151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.190181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.190310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.190339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.504 [2024-07-14 14:10:03.190521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.504 [2024-07-14 14:10:03.190566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.504 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.190688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.190714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.190813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.190838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.190944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.190970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.191108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.191335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.191469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.191629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.191764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.191913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.191996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.192852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.192981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.193951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.193980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.194903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.194931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.195046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.195189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.195352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.195542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.195720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.195874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.195997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.196114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.196256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.196402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.196554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.196720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.196893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.196926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.197047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.197073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.197151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.197176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.197406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.197434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.197568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.197612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.197722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.197750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.197869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.197904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.198902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.198928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.199043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.199068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.199157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.199183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.199277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.199302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.199433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.505 [2024-07-14 14:10:03.199475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.505 qpair failed and we were unable to recover it.
00:34:25.505 [2024-07-14 14:10:03.199601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.199629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.199730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.199758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.199854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.199892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.200915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.200941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.201077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.201213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.201408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.201560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.201733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.201871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.201997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.202151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.202280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.202421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.202563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.202747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.202889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.202916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.203848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.203884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.204026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.204052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.204135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.204177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.204295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.204323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.204476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.204504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.204655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.506 [2024-07-14 14:10:03.204687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.506 qpair failed and we were unable to recover it.
00:34:25.506 [2024-07-14 14:10:03.204855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.204888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.205002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.205110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.205253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.205450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 
00:34:25.506 [2024-07-14 14:10:03.205593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.205784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.205905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.205944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.206048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.206077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.206221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.206267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 
00:34:25.506 [2024-07-14 14:10:03.206367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.206403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.206559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.206587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.206719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.206745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.206893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.206938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.207022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 
00:34:25.506 [2024-07-14 14:10:03.207161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.207312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.207432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.207617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.207781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 
00:34:25.506 [2024-07-14 14:10:03.207895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.207920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.208036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.208205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.208358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.208518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 
00:34:25.506 [2024-07-14 14:10:03.208633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.208781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.208941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.208967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.209076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.209102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.209209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.209235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 
00:34:25.506 [2024-07-14 14:10:03.209363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.209392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.209536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.209564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.209687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.209715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.209811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.506 [2024-07-14 14:10:03.209840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.506 qpair failed and we were unable to recover it. 00:34:25.506 [2024-07-14 14:10:03.209958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.209983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.210078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.210104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.210211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.210239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.210352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.210380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.210511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.210539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.210642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.210670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.210841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.210866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.210985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.211135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.211312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.211448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.211603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.211782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.211920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.211946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.212040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.212065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.212141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.212166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.212284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.212309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.212427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.212455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.212921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.212966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.213111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.213138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.213262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.213290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.213439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.213467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.213598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.213626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.213727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.213756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.213863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.213896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.213979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.214089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.214208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.214391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.214519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.214664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.214827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.214855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.215042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.215081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.215219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.215258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.215371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.215401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.215527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.215572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.215733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.215776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.215894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.215920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.216451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.216886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.216977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.217002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.217091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.217121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.217204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.217230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.217364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.217389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.217501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.217527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 00:34:25.507 [2024-07-14 14:10:03.217630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.507 [2024-07-14 14:10:03.217658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.507 qpair failed and we were unable to recover it. 
00:34:25.507 [2024-07-14 14:10:03.217775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.217819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.217940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.217968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.218056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.218083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.218296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.218326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.218452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.218481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.218634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.218663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.218793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.218822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.218965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.218990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.219129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.219316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.219444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.219593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.219768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.219902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.507 [2024-07-14 14:10:03.219993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.507 [2024-07-14 14:10:03.220018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.507 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.220093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.220118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.220251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.220279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.220411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.220454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.220610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.220653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.220756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.220787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.220913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.220941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.221933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.221958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.222847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.222990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.223112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.223246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.223403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.223579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.223706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.223846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.223871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.224913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.224948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.225072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.225191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.225357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.225559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.225693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.225864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.225988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.226891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.226993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.227132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.227336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.227532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.227653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.227777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.227901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.227928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.228022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.228047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.228222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.228250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.228430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.228458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.228597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.228638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.228770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.228798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.228908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.228934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.229048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.229073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.229173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.229206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.508 [2024-07-14 14:10:03.229361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.508 [2024-07-14 14:10:03.229389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.508 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.229515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.229543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.229631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.229659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.229783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.229822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.229918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.229953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.230096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.230123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.230252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.230295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.230438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.230486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.230620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.230649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.230809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.230835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.230958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.231143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.231361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.231527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.231670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.231810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.231947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.231973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.232910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.232936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.233068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.233112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.233242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.233285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.233401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.233445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.233582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.233608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.233738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.233776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.233864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.233898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.234871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.234981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.235137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.235319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.235534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.235698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.235835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.235968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.235995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.236094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.236123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.236280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.236322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.236516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.236565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.236693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.236722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.236883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.236909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.236999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.237026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.237143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.237188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.237295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.237321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.237488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.237538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.237698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.237728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.237839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.237864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.237994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.238151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.238311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.238454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.238617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.238784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.238915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.238960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.239084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.239111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.239224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.239250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.239338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.239380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.239472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.239499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.239617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.239651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.509 [2024-07-14 14:10:03.239780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.509 [2024-07-14 14:10:03.239811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.509 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.239966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.239993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.240088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.240113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.240258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.240300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.240426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.240454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.240552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.240580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.240679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.240707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.240857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.240890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.241047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.241188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.241390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.241552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.241715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.241898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.241990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.242109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.242275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.242458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.242640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.242759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.242950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.242989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.243119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.243262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.243433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.243663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.243789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.243910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.243999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.244134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.244317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.244500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.244625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.244775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.244929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.244967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.245090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.245116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.245232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.245257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.245366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.245394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.245504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.245545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.245652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.245695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.245819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.245865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.246034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.246146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.246329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.246538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.246727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.510 [2024-07-14 14:10:03.246889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.510 qpair failed and we were unable to recover it.
00:34:25.510 [2024-07-14 14:10:03.246999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.247111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.247265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.247481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.247632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 
00:34:25.510 [2024-07-14 14:10:03.247803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.247918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.247944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.248068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.248093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.248205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.248234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.248419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.248469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 
00:34:25.510 [2024-07-14 14:10:03.248614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.248664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.248794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.248823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.248959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.248984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.249077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.249102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 00:34:25.510 [2024-07-14 14:10:03.249229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.510 [2024-07-14 14:10:03.249254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.510 qpair failed and we were unable to recover it. 
00:34:25.510 [2024-07-14 14:10:03.249362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.249390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.249483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.249511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.249606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.249633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.249726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.249753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.249849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.249901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.250016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.250046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.250168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.250193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.250304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.250329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.250430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.250458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.250617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.250645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.250798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.250826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.250969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.251095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.251265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.251395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.251581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.251753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.251903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.251946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.252039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.252065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.252168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.252195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.252373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.252398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.252571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.252600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.252725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.252754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.252874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.252926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.253015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.253040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.253152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.253194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.253306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.253332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.253502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.253531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.253686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.253713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.253842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.253870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.254004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.254137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.254273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.254427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.254605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.254776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.254959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.254986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.255098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.255123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.255226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.255254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.255386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.255429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.255547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.255572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.255711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.255738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.255885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.255912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.256032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.256057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.256207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.256234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.256341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.256366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.256538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.256567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.256723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.256751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.256846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.256873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.257018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.257142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.257299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.257426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.257568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.257773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.257941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.257968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.258085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.258110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.258199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.258225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.258370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.258396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.258539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.258568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 
00:34:25.511 [2024-07-14 14:10:03.258676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.258701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.511 qpair failed and we were unable to recover it. 00:34:25.511 [2024-07-14 14:10:03.258843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.511 [2024-07-14 14:10:03.258870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.259020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.259161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.259299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 
00:34:25.512 [2024-07-14 14:10:03.259446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.259632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.259777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.259910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.259953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.260061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.260086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 
00:34:25.512 [2024-07-14 14:10:03.260185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.260211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.260351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.260393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.260510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.260539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.260682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.260708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 00:34:25.512 [2024-07-14 14:10:03.260839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.512 [2024-07-14 14:10:03.260869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.512 qpair failed and we were unable to recover it. 
00:34:25.513 [2024-07-14 14:10:03.278142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.513 [2024-07-14 14:10:03.278171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.513 qpair failed and we were unable to recover it. 00:34:25.513 [2024-07-14 14:10:03.278300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.513 [2024-07-14 14:10:03.278329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.513 qpair failed and we were unable to recover it. 00:34:25.513 [2024-07-14 14:10:03.278430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.513 [2024-07-14 14:10:03.278458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.513 qpair failed and we were unable to recover it. 00:34:25.513 [2024-07-14 14:10:03.278559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.513 [2024-07-14 14:10:03.278588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.513 qpair failed and we were unable to recover it. 00:34:25.513 [2024-07-14 14:10:03.278719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.513 [2024-07-14 14:10:03.278747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.513 qpair failed and we were unable to recover it. 
00:34:25.513 [2024-07-14 14:10:03.278865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.278930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.279055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.279082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.279221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.279265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.279436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.279471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.279625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.279673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.279796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.279824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.279957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.279984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.280097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.280236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.280439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.280602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.280727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.280844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.280982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.281109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.281250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.281416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.281583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.281762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.281955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.281982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.282097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.282122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.282201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.282226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.282353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.282380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.282593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.282644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.282813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.282855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.283012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.283042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.283164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.283189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.283287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.283312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.283442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.283482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.283681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.283709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.283859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.283903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.284873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.284927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.285048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.285073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.285188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.285213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.285357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.285386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.285497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.285524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.285662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.285690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.285852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.285889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.286036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.286062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.286183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.286225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.286363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.286388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.286620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.286672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.286793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.286836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.286990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.287016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.287147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.287175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.287304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.287334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.287491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.287519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.287659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.287687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.287838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.287867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.288936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.288980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.289100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.289245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.289419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.289574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.289691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.289859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.514 qpair failed and we were unable to recover it.
00:34:25.514 [2024-07-14 14:10:03.289983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.514 [2024-07-14 14:10:03.290008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.290095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.290121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.290202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.290232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.290377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.290403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.290498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.290523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.290693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.290720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.290858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.290889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.291945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.291972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.292118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.292261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.292371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.292552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.292707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.292825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.292973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.293003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.293120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.515 [2024-07-14 14:10:03.293146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.515 qpair failed and we were unable to recover it.
00:34:25.515 [2024-07-14 14:10:03.293259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.293285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.293441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.293470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.293609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.293635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.293752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.293778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.293919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.293948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.294110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.294136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.294257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.294298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.294452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.294477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.294589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.294614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.294728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.294754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.294921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.294947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.295031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.295197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.295335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.295495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.295635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.295780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.295915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.295941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.296034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.296146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.296255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.296405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.296600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.296768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.296886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.296912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.297003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.297029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.297144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.297170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.297336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.297363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.297485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.297513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.297672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.297698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.297859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.297893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.297987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.298123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.298261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.298428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.298594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.298706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.298842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.298870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.299014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.299160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.299304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.299471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.299583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.299750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.299892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.299918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 
00:34:25.515 [2024-07-14 14:10:03.300037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.515 [2024-07-14 14:10:03.300063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.515 qpair failed and we were unable to recover it. 00:34:25.515 [2024-07-14 14:10:03.300219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.300247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.300365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.300391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.300534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.300559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.300693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.300722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.300818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.300848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.301013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.301038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.301196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.301224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.301361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.301388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.301529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.301570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.301725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.301753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.301891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.301917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.302009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.302171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.302318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.302436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.302597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.302769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.302939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.302966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.303085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.303220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.303339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.303481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.303647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.303766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.303937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.303963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.304086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.304111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.304261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.304306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.304433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.304462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.304600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.304626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.304723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.304750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.304889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.304918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.305021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.305138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.305258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.305416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.305555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.305686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.305855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.305886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 00:34:25.516 [2024-07-14 14:10:03.306011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.516 [2024-07-14 14:10:03.306037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.516 qpair failed and we were unable to recover it. 
00:34:25.516 [2024-07-14 14:10:03.306146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.306172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.306265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.306291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.306410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.306436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.306608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.306635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.306754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.306780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.306863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.306896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.307068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.307240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.307356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.307558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.307706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.307848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.307998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.308193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.308313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.308462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.308613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.308724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.308891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.308920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.309904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.309930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.310865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.310898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.311045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.311190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.311327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.311504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.311647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.516 [2024-07-14 14:10:03.311845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.516 qpair failed and we were unable to recover it.
00:34:25.516 [2024-07-14 14:10:03.311991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.312156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.312300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.312476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.312651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.312767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.312960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.312991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.313946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.313972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.314120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.314146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.314286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.314315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.314450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.314479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.314601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.314626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.314744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.314771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.314891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.314916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.315027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.315052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.315192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.315221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.315350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.315376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.315470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.315495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.315627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.315655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.315843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.315872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.316957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.316982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.317124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.317152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.317282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.317308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.317419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.317444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.317576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.317605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.317773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.317799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.317887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.317913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.318063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.318090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.318232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.318257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.318347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.318373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.318539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.318567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.318710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.318736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.318851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.318882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.319903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.319932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.320071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.320097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.320237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.320263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.320403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.320431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.320539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.517 [2024-07-14 14:10:03.320565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.517 qpair failed and we were unable to recover it.
00:34:25.517 [2024-07-14 14:10:03.320707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.320736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.320888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.320917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.321052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.321078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.321194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.321220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.321381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.321409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 
00:34:25.517 [2024-07-14 14:10:03.321531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.321557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.321699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.321724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.321908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.321951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.322064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.322090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.322210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.322235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 
00:34:25.517 [2024-07-14 14:10:03.322364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.322392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.322507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.322532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.322654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.322680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.322819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.322846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.323000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.323026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 
00:34:25.517 [2024-07-14 14:10:03.323168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.323211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.323338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.323365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.323529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.323554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.323663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.517 [2024-07-14 14:10:03.323688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.517 qpair failed and we were unable to recover it. 00:34:25.517 [2024-07-14 14:10:03.323792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.323819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.323967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.323992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.324111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.324135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.324268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.324295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.324443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.324468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.324580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.324604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.324771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.324798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.324938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.324963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.325087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.325112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.325247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.325275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.325383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.325408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.325525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.325549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.325647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.325674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.325834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.325861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.325997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.326116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.326236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.326381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.326571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.326732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.326848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.326874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.326982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.327159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.327299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.327457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.327653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.327797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.327935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.327960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.328100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.328124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.328241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.328266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.328407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.328432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.328548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.328573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.328662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.328688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.328839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.328904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.329078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.329105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.329231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.329274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.329399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.329427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.329567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.329592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.329685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.329712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.329818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.329846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.329990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.330132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.330296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.330462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.330573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.330708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.330855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.330886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.331004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.331143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.331341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.331460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.331627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.331764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.331905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.331932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.332030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.332058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.332192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.332218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.332361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.332402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.332527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.332571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.332675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.332704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.332829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.332858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.332985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.333126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.333244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.333432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.333574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.333744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.333898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.333929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.334046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.334072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.334211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.334236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.334369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.334398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.334540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.334566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 
00:34:25.518 [2024-07-14 14:10:03.334686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.334712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.518 qpair failed and we were unable to recover it. 00:34:25.518 [2024-07-14 14:10:03.334828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.518 [2024-07-14 14:10:03.334856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.334959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.334997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.335101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.335128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.335268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.335297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.335437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.335479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.335650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.335693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.335801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.335827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.335940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.335978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.336112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.336274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.336439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.336578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.336686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.336824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.336953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.336978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.337093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.337118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.337225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.337258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.337410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.337438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.337531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.337559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.337657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.337684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.337807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.337834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.337969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.338138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.338251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.338393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.338579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.338731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.338868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.338905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.339005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.339030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.339119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.339144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.339253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.339281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.339436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.339463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.339619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.339674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.339801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.339829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.339992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.340018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.340159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.340200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.340351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.340401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.340533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.340577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.340678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.340710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.340854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.340885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.340978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.341093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.341260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.341402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.341523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.341685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.341814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.341954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.341980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.342092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.342245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.342384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.342543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.342714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.342824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.342972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.342998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.343112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.343137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.343245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.343270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.343416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.343444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.343550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.343575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.343689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.343716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.343838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.343890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.344039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.344068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.344214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.344240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.344412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.344441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.344580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.344606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.344686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.344712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.344846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.344886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.345025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.345163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.345302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.345488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.345636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.345770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.345884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.345909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.346025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.346050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.346146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.346171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.346281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.346306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.346415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.346440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 00:34:25.519 [2024-07-14 14:10:03.346541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.519 [2024-07-14 14:10:03.346568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.519 qpair failed and we were unable to recover it. 
00:34:25.519 [2024-07-14 14:10:03.346698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.346723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.346840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.346865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.346992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.347116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.347231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 
00:34:25.520 [2024-07-14 14:10:03.347377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.347530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.347650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.347795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.347934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.347960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 
00:34:25.520 [2024-07-14 14:10:03.348042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.348204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.348344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.348478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.348614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 
00:34:25.520 [2024-07-14 14:10:03.348807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.348924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.348950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.349039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.349065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.349191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.349219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 00:34:25.520 [2024-07-14 14:10:03.349328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.520 [2024-07-14 14:10:03.349370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.520 qpair failed and we were unable to recover it. 
00:34:25.520 [2024-07-14 14:10:03.349499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.349527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.349721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.349748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.349867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.349905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.350886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.350915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.351893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.351919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.352860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.352889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.353887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.353914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.354907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.354932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.355049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.355090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.355258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.355286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.355416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.355441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.355554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.355578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.355695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.355726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.355887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.355914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.356952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.356980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.357089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.520 [2024-07-14 14:10:03.357119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.520 qpair failed and we were unable to recover it.
00:34:25.520 [2024-07-14 14:10:03.357236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.357261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.357394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.357422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.357553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.357578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.357700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.357725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.357866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.357900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.358937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.358964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.359925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.359950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.360896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.360927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.361047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.361079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.361222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.361248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.361389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.361417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.361553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.361580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.361700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.361725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.361898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.361927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.362848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.362881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.363020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.363045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.363159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.363184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.363301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.521 [2024-07-14 14:10:03.363327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.521 qpair failed and we were unable to recover it.
00:34:25.521 [2024-07-14 14:10:03.363453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.363494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.363619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.363647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.363749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.363774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.363856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.363889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.363979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.364110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.364246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.364410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.364577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.364691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.364833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.364861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.365015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.365158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.365290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.365455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.365564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.365723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.365863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.365897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.366012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.366137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.366273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.366439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.366588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.366749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.366888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.366914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.367083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.367122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.367278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.367305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.367438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.367467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.367594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.367622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.367765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.367792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.367908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.367935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.368085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.368111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.368200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.368225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.368314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.368339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.368422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.368448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 
00:34:25.521 [2024-07-14 14:10:03.368562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.368587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.368671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.521 [2024-07-14 14:10:03.368696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.521 qpair failed and we were unable to recover it. 00:34:25.521 [2024-07-14 14:10:03.368829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.368857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.368974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.369123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.369276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.369402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.369535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.369690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.369853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.369883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.369998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.370146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.370304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.370421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.370564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.370736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.370885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.370929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.371094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.371124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.371262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.371288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.371434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.371476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.371607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.371635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.371785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.371810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.371932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.371957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.372098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.372145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.372233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.372258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.372380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.372405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.372565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.372593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.372728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.372753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.372837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.372861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.372974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.373109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.373233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.373391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.373530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.373647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.373819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.373951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.373979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.374125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.374167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.374291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.374320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.374459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.374485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.374609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.374635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.374752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.374781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.374890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.374916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.375007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.375032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.375199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.375228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.375365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.375389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.375507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.375532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.375674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.375705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.375890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.375932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.376048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.376251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.376390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.376504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.376637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.376810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.376963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.376989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.377129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.377158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.377305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.377336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.377449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.377475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 00:34:25.522 [2024-07-14 14:10:03.377584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.522 [2024-07-14 14:10:03.377612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.522 qpair failed and we were unable to recover it. 
00:34:25.522 [2024-07-14 14:10:03.377748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.377774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.377894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.377920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.378951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.522 [2024-07-14 14:10:03.378989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.522 qpair failed and we were unable to recover it.
00:34:25.522 [2024-07-14 14:10:03.379141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.379167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.379250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.379276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.379428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.379456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.379597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.379621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.379743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.379768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.379907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.379936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.380925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.380951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.381949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.381975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.382086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.382114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.382208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.382233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.382373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.382399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.382554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.382581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.382699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.382725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.382851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.382883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.383921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.383950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.384058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.384084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.384163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.384188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.523 qpair failed and we were unable to recover it.
00:34:25.523 [2024-07-14 14:10:03.384348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.523 [2024-07-14 14:10:03.384392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.384517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.384542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.384636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.384662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.384800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.384829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.384973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.384998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.385094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.385119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.385226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.385254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.385400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.385425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.385513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.385540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.385705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.385735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.385882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.385907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.386929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.386955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.387114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.387142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.387287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.387313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.387432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.387461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.387620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.387648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.387775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.387817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.387962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.387988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.388930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.388974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.389089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.389114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.389227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.389253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.389367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.389395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.389568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.389594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.389709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.389750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.389883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.389912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.390928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.390953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.524 [2024-07-14 14:10:03.391047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.524 [2024-07-14 14:10:03.391072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.524 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.391200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.391228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.391334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.391359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.391452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.391482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.391620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.391648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.391784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.391809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.391895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.391921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.392901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.392932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.393092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.393235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.393408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.393551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.393696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.393858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.393998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.394958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.394984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.395896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.395925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.396861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.396986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.397120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.397294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.397463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.397620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.397809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.525 qpair failed and we were unable to recover it.
00:34:25.525 [2024-07-14 14:10:03.397954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.525 [2024-07-14 14:10:03.397995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.398908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.398936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.399897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.399922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.400854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.400979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.401169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.401291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.401446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.401618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.401727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.401926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.401955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.402949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.402974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.403063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.403088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.403227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.403256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.403394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.403419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.403510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.403535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.403672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.403704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.526 qpair failed and we were unable to recover it.
00:34:25.526 [2024-07-14 14:10:03.403810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.526 [2024-07-14 14:10:03.403835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.403960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.403986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.404076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.404103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.404222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.404247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.404358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.404384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.404553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.404581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.404693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.404718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.404828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.404853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.405035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.405060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.405185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.405210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.405350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.405392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.405483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.405510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.405602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.527 [2024-07-14 14:10:03.405644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.527 qpair failed and we were unable to recover it.
00:34:25.527 [2024-07-14 14:10:03.405761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.405803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.405946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.405972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.406059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.406192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.406299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.406465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.406580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.406714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.406882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.406909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.407026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.407070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.407198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.407227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.407397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.407422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.407539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.407581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.407668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.407697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.407841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.407866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.408016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.408180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.408345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.408485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.408623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.408750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.408901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.408927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.409063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.409224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.409342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.409468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.409605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.409750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.409883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.409910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.410021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.410046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.410183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.410207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.410345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.410372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.410484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.410508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.410614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.410638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 00:34:25.527 [2024-07-14 14:10:03.410767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.527 [2024-07-14 14:10:03.410795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.527 qpair failed and we were unable to recover it. 
00:34:25.527 [2024-07-14 14:10:03.410935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.410959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.411076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.411195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.411310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.411446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.411570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.411707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.411843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.411870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.412010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.412049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.412175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.412201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.412284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.412310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.412463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.412490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.412634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.412659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.412819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.412848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.412981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.413151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.413266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.413396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.413585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.413721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.413858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.413890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.414014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.414157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.414300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.414445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.414582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.414714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.414847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.414872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.415023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.415223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.415381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.415497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.415628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.415796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.415932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.415957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.416072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.416206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.416341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.416496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.416627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.416743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.416900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.416925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.417047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.417076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.417196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.417221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.417364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.417392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 
00:34:25.528 [2024-07-14 14:10:03.417524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.417548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.417668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.417709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.417840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.528 [2024-07-14 14:10:03.417867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.528 qpair failed and we were unable to recover it. 00:34:25.528 [2024-07-14 14:10:03.417978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.418092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.418233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.418345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.418494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.418660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.418851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.418964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.418988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.419110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.419140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.419275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.419299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.419381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.419406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.419504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.419530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.419671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.419695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.419805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.419829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.420000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.420194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.420336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.420472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.420608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.420760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.420898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.420926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.421063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.421200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.421341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.421525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.421635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.421779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.421918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.421943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.422029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.422163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.422317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.422428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.422558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.422705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.422849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.422873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.423046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.423073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.423228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.423252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.423371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.423413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.423500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.423527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.423679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.423706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.423806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.423833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.423980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.424145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.424263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.424464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.424603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.424745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 
00:34:25.529 [2024-07-14 14:10:03.424856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.424895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.529 qpair failed and we were unable to recover it. 00:34:25.529 [2024-07-14 14:10:03.424984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.529 [2024-07-14 14:10:03.425012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.425163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.425189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.425298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.425324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.425438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.425463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.425585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.425610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.425742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.425768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.425915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.425941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.426065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.426089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.426224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.426251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.426415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.426440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.426545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.426570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.426726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.426754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.426910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.426936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.427050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.427074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.427237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.427280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.427424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.427451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.427573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.427599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.427692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.427718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.427836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.427863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.427989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.428163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.428303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.428418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.428535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.428647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.428788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.428939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.428967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.429076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.429104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.429196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.429221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.429368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.429395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.429539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.429565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.429678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.429703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.429843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.429873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.430008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.430151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.430316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.430486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.430602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.430743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 
00:34:25.530 [2024-07-14 14:10:03.430886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.430911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.431028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.431052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.431147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.431171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.431286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.530 [2024-07-14 14:10:03.431310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.530 qpair failed and we were unable to recover it. 00:34:25.530 [2024-07-14 14:10:03.431425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.431449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 
00:34:25.531 [2024-07-14 14:10:03.431548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.431575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.431683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.431707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.431818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.431842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.431983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.432107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 
00:34:25.531 [2024-07-14 14:10:03.432230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.432347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.432502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.432642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.432792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 
00:34:25.531 [2024-07-14 14:10:03.432921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.432948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.433070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.433094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.433182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.433209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.433302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.433327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 00:34:25.531 [2024-07-14 14:10:03.433419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.531 [2024-07-14 14:10:03.433443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.531 qpair failed and we were unable to recover it. 
00:34:25.531 [2024-07-14 14:10:03.433575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.433603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.433713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.433737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.433830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.433854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.434953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.434979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.435166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.435333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.435467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.435634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.435771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.435907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.435994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.436947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.436974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.437090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.437213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.437378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.437533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.437689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.437816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.437975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.531 [2024-07-14 14:10:03.438014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.531 qpair failed and we were unable to recover it.
00:34:25.531 [2024-07-14 14:10:03.438141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.438168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.438287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.438331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.438473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.438518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.438630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.438674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.438824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.438851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.438943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.438970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.439961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.439987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.440134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.440178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.440289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.440333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.440472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.440515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.440599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.440625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.440739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.440770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.440914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.440940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.441917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.441945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.442041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.442067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.442197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.442225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.442318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.442345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.442496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.442524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.442708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.442754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.442886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.442914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.443919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.443962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.532 [2024-07-14 14:10:03.444928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.532 [2024-07-14 14:10:03.444967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.532 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.445883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.445909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.446967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.533 [2024-07-14 14:10:03.446995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.533 qpair failed and we were unable to recover it.
00:34:25.533 [2024-07-14 14:10:03.447097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.447126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.447259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.447288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.447471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.447517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.447611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.447638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.447728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.447754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 
00:34:25.533 [2024-07-14 14:10:03.447853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.447888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.447985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 
00:34:25.533 [2024-07-14 14:10:03.448490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.448855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.448978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 
00:34:25.533 [2024-07-14 14:10:03.449105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.449278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.449401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.449518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.449685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 
00:34:25.533 [2024-07-14 14:10:03.449817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.449961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.449993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.450088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.450114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.450211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.450236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.450374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.450401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 
00:34:25.533 [2024-07-14 14:10:03.450522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.450550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.533 [2024-07-14 14:10:03.450650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.533 [2024-07-14 14:10:03.450677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.533 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.450779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.450804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.450889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.450914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.451008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.451138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.451262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.451398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.451524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.451643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.451844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.451901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.452021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.452201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.452349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.452507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.452662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.452818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.452967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.452996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.453132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.453287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.453422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.453570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.453685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.453828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.453959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.453984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.454073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.454098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.454197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.454225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.454341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.454391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.454479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.454507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.454644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.454674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.454791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.454816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.454969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.455146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.455268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.455443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.455587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.455708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.455885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.455913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.456038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.456185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.456304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.456442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.456597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.456720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.456889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.456917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 
00:34:25.534 [2024-07-14 14:10:03.457049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.457075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.457181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.457209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.457391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.457436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.457535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.534 [2024-07-14 14:10:03.457563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.534 qpair failed and we were unable to recover it. 00:34:25.534 [2024-07-14 14:10:03.457703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.457729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 
00:34:25.535 [2024-07-14 14:10:03.457842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.457905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 
00:34:25.535 [2024-07-14 14:10:03.458502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.458903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.458929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.459051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 
00:34:25.535 [2024-07-14 14:10:03.459198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.459341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.459452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.459566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.459684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 
00:34:25.535 [2024-07-14 14:10:03.459811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.459947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.459977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.460131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.460175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.460282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.460310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 00:34:25.535 [2024-07-14 14:10:03.460441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.535 [2024-07-14 14:10:03.460467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.535 qpair failed and we were unable to recover it. 
00:34:25.535 [2024-07-14 14:10:03.460607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.460633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.460725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.460750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.460838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.460863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.460957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.460983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.461882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.461996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.462869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.462903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.463025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.463050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.463165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.463197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.463333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.463379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.535 [2024-07-14 14:10:03.463514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.535 [2024-07-14 14:10:03.463559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.535 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.463644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.536 [2024-07-14 14:10:03.463670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.536 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.463756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.536 [2024-07-14 14:10:03.463782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.536 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.463864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.536 [2024-07-14 14:10:03.463900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.536 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.464030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.536 [2024-07-14 14:10:03.464059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.536 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.464192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.536 [2024-07-14 14:10:03.464218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.536 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.464301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.536 [2024-07-14 14:10:03.464326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.536 qpair failed and we were unable to recover it.
00:34:25.536 [2024-07-14 14:10:03.464417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.464444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.464544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.464572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.464691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.464717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.464806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.464833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.464935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.464963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.465090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.465203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.465317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.465485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.465625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.821 [2024-07-14 14:10:03.465766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.821 qpair failed and we were unable to recover it.
00:34:25.821 [2024-07-14 14:10:03.465873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.465920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.466070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.466235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.466384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.466545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.466726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.466861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.466991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.467108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.467263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.467409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.467535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.467703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.467891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.467919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.468913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.468939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.469966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.469992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.470890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.470933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.471022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.471046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.822 qpair failed and we were unable to recover it.
00:34:25.822 [2024-07-14 14:10:03.471138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.822 [2024-07-14 14:10:03.471164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.471262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.471289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.471392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.471416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.471534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.471562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.471669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.471710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.471794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.471818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.471911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.471940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.472057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.472196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.472308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.472506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.472713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.472838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.472986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.473011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.473097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.473121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.473238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.823 [2024-07-14 14:10:03.473263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.823 qpair failed and we were unable to recover it.
00:34:25.823 [2024-07-14 14:10:03.473366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.473394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.473510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.473550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.473648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.473675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.473765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.473793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.473917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.473956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 
00:34:25.823 [2024-07-14 14:10:03.474046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.474222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.474355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.474493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.474626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 
00:34:25.823 [2024-07-14 14:10:03.474776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.474906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.474931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 
00:34:25.823 [2024-07-14 14:10:03.475403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.475869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 00:34:25.823 [2024-07-14 14:10:03.475984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.823 [2024-07-14 14:10:03.476014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.823 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.476103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.476127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.476266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.476290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.476399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.476429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.476547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.476589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.476717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.476746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.476841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.476870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.477012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.477131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.477264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.477407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.477570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.477725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.477884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.477923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.478032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.478076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.478208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.478251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.478405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.478434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.478546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.478590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.478715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.478744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.478839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.478868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.479031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.479070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.479232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.479262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.479419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.479462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.479596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.479640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.479784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.479809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.479930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.479956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.480093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.480138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.480279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.480310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.480426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.480469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.480580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.480633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.480783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.480811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.480904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.480946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.481043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.481070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.481190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.481237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.481370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.481397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.481558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.481611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 
00:34:25.824 [2024-07-14 14:10:03.481702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.481727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.481859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.481905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.482066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.482093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.824 qpair failed and we were unable to recover it. 00:34:25.824 [2024-07-14 14:10:03.482204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.824 [2024-07-14 14:10:03.482254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.482399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.482443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.482557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.482586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.482689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.482717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.482886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.482912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.483002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.483110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.483252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.483417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.483540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.483695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.483837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.483970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.483996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.484112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.484238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.484363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.484497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.484687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.484807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.484946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.484971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.485054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.485189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.485323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.485496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.485623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.485785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.485913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.485938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.486055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.486215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.486436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.486564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.486687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.486846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.486971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.486995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.487089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.487117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.487227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.487256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 00:34:25.825 [2024-07-14 14:10:03.487369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.825 [2024-07-14 14:10:03.487398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.825 qpair failed and we were unable to recover it. 
00:34:25.825 [2024-07-14 14:10:03.487554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.825 [2024-07-14 14:10:03.487583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.825 qpair failed and we were unable to recover it.
00:34:25.825 [2024-07-14 14:10:03.487672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.825 [2024-07-14 14:10:03.487701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.825 qpair failed and we were unable to recover it.
00:34:25.825 [2024-07-14 14:10:03.487828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.825 [2024-07-14 14:10:03.487858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.825 qpair failed and we were unable to recover it.
00:34:25.825 [2024-07-14 14:10:03.488007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.825 [2024-07-14 14:10:03.488034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.825 qpair failed and we were unable to recover it.
00:34:25.825 [2024-07-14 14:10:03.488154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.488180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.488272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.488302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.488407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.488437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.488547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.488571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.488709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.488736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.488858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.488900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.489967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.489992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.490886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.490912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.491883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.491922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.492055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.492230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.492361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.492521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.492672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.826 [2024-07-14 14:10:03.492827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.826 qpair failed and we were unable to recover it.
00:34:25.826 [2024-07-14 14:10:03.492962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.492987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.493952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.493980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.494130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.494156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.494257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.494286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.494382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.494408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.494532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.494575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.494722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.494747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.494863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.494898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.495975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.495999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.496874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.496905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.497929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.497954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.498097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.498122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.498215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.498239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.498347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.827 [2024-07-14 14:10:03.498373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.827 qpair failed and we were unable to recover it.
00:34:25.827 [2024-07-14 14:10:03.498461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.498487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.498575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.498601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.498717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.498742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.498824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.498849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.498936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.498962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.499970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.499996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.500087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.500114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.500228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.500271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.500430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.828 [2024-07-14 14:10:03.500473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.828 qpair failed and we were unable to recover it.
00:34:25.828 [2024-07-14 14:10:03.500565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.500592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.500708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.500734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.500832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.500871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.500984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.501098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 
00:34:25.828 [2024-07-14 14:10:03.501255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.501396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.501543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.501729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.501872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.501919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 
00:34:25.828 [2024-07-14 14:10:03.502045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.502183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.502337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.502465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.502625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 
00:34:25.828 [2024-07-14 14:10:03.502796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.502924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.502951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.503086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.503130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.503226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.503252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.503426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.503472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 
00:34:25.828 [2024-07-14 14:10:03.503565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.503592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.503685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.503712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.503833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.503857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.503978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.504012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 00:34:25.828 [2024-07-14 14:10:03.504112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.828 [2024-07-14 14:10:03.504157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.828 qpair failed and we were unable to recover it. 
00:34:25.828 [2024-07-14 14:10:03.504326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.504375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.504524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.504574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.504713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.504739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.504861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.504896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.504989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.505152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.505281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.505439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.505650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.505788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.505927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.505953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.506041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.506177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.506353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.506538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.506682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.506802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.506922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.506949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.507064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.507180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.507348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.507503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.507630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.507789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.507919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.507945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.508053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.508186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.508325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.508435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.508582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.508728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.508899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.508937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.509460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.509862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.829 [2024-07-14 14:10:03.509968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.510006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 
00:34:25.829 [2024-07-14 14:10:03.510122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.829 [2024-07-14 14:10:03.510167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.829 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.510319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.510352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.510479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.510507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.510606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.510635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.510769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.510795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.510914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.510940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.511055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.511187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.511306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.511458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.511590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.511733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.511865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.511917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.512030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.512141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.512248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.512388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.512506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.512632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.512777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.512930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.512958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.513045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.513179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.513313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.513505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.513645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.513768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.513915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.513940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.514031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.514141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.514262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.514398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.514546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.514677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.514839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.514865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 
00:34:25.830 [2024-07-14 14:10:03.514994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.515020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.830 qpair failed and we were unable to recover it. 00:34:25.830 [2024-07-14 14:10:03.515107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.830 [2024-07-14 14:10:03.515150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.515280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.515309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.515412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.515439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.515563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.515590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.515682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.515709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.515849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.515873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.515996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.516107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.516254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.516448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.516594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.516729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.516894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.516937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.517033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.517151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.517270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.517404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.517575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.517728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.517887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.517934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.518022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.518170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.518335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.518466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.518615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.518796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.518942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.518968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.519079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.519191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.519335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.519483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.519603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.519783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.519949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.519976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.520099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.520127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.520221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.520247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.520360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.520388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.520513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.520543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.520680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.520710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 
00:34:25.831 [2024-07-14 14:10:03.520814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.520855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.520961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.831 [2024-07-14 14:10:03.521000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.831 qpair failed and we were unable to recover it. 00:34:25.831 [2024-07-14 14:10:03.521129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.521175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.521311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.521340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.521452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.521479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.521623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.521654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.521751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.521779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.521921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.521947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.522043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.522154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.522264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.522404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.522582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.522721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.522848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.522881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.523004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.523172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.523337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.523477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.523629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.523760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.523873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.523905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.524404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.524952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.524978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.525083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.525263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.525419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.525555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.525676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.525789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.525906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.525932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.526055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.526080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.526170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.526194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 00:34:25.832 [2024-07-14 14:10:03.526287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.832 [2024-07-14 14:10:03.526312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.832 qpair failed and we were unable to recover it. 
00:34:25.832 [2024-07-14 14:10:03.526403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.526428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.526555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.526594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.526691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.526719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.526809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.526835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.526961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.526987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.527080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.527271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.527430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.527570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.527719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.527841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.527966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.527992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.528103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.528217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.528337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.528493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.528628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.528780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.528932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.528960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.529074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.529265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.529395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.529558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.529679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.529810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.529948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.529974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.530599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.530959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.530984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.531089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.531246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.531372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.531495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.531627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.531770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 
00:34:25.833 [2024-07-14 14:10:03.531912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.833 [2024-07-14 14:10:03.531938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.833 qpair failed and we were unable to recover it. 00:34:25.833 [2024-07-14 14:10:03.532033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.532161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.532346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.532465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.532596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.532733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.532848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.532873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.532979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.533088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.533199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.533323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.533460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.533601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.533717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.533894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.533920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.534007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.534032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.534137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.534195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.534398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.534428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.534526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.534555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.534707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.534736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.534890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.534929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.535030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.535160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.535344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.535471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.535612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.535771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.535939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.535965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.536056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.536081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.536221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.536246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.536380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.536408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.536563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.536591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.536691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.536719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.536820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.536847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 
00:34:25.834 [2024-07-14 14:10:03.537010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.537049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.834 [2024-07-14 14:10:03.537158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.834 [2024-07-14 14:10:03.537197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.834 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.537303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.537330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.537481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.537510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.537602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.537631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 
00:34:25.835 [2024-07-14 14:10:03.537751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.537778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.537906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.537932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 
00:34:25.835 [2024-07-14 14:10:03.538412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.538897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.538977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 
00:34:25.835 [2024-07-14 14:10:03.539133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.539294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.539419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.539549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.539703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 
00:34:25.835 [2024-07-14 14:10:03.539896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.539924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 
00:34:25.835 [2024-07-14 14:10:03.540573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.540946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.540972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 00:34:25.835 [2024-07-14 14:10:03.541059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.835 [2024-07-14 14:10:03.541085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.835 qpair failed and we were unable to recover it. 
00:34:25.835 [2024-07-14 14:10:03.541202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.541227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.541309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.541334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.541446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.541471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.541574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.541614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.541706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.541733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.541869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.541905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.835 [2024-07-14 14:10:03.542795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.835 [2024-07-14 14:10:03.542822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.835 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.542949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.542978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.543966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.543993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.544138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.544257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.544434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.544562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.544693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.544842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.544995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.545140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.545286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.545467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.545658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.545793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.545965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.545992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.546907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.546933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.547887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.547930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.548058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.548097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.548231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.548270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.548410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.548455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.548556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.548585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.548747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.548773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.836 [2024-07-14 14:10:03.548858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.836 [2024-07-14 14:10:03.548895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.836 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.548980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.549967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.549992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.550944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.550970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.551102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.551150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.551314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.551343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.551441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.551467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.551556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.551601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.551706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.551733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.551863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.551899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.552055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.552205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.552371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.552523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.552722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.552871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.552979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.553107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.553233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.553383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.553567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.553728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.553898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.553924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.837 qpair failed and we were unable to recover it.
00:34:25.837 [2024-07-14 14:10:03.554902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.837 [2024-07-14 14:10:03.554928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.838 qpair failed and we were unable to recover it.
00:34:25.838 [2024-07-14 14:10:03.555019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.555178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.555355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.555513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.555679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.555794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.555931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.555960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.556078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.556221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.556359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.556475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.556604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.556747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.556896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.556923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.557011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.557158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.557328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.557444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.557550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.557684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.557816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.557845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.557989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.558149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.558269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.558395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.558571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.558740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.558936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.558963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.559058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.559190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.559372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.559507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.559637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.559782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.559901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.559928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.560015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.560041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.560135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.560160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.560303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.560329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.560443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.560472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 00:34:25.838 [2024-07-14 14:10:03.560626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.838 [2024-07-14 14:10:03.560655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.838 qpair failed and we were unable to recover it. 
00:34:25.838 [2024-07-14 14:10:03.560776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.560819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.560941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.560969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.561058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.561084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.561183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.561212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.561376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.561402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.561510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.561554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.561680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.561706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.561824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.561852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.561988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.562126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.562287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.562437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.562603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.562744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.562854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.562885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.562989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.563015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.563143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.563182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.563311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.563352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.563512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.563562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.563716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.563769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.563914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.563940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.564081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.564106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.564203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.564251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.564429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.564476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.564571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.564599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.564696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.564723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.564847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.564871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.564979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.565093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.565212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.565322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.565478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.565656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.565782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 00:34:25.839 [2024-07-14 14:10:03.565932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.839 [2024-07-14 14:10:03.565960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.839 qpair failed and we were unable to recover it. 
00:34:25.839 [2024-07-14 14:10:03.566052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.566167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.566329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.566450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.566585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.566711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.566890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.566935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.567030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.567057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.567230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.567277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.567389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.567433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.567576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.567620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.567729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.567754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.567846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.567873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.567995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.568124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.568249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.568372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.568521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.568700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.568838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.568866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.569004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.569176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.569317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.569430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.569542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.569690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.569808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.569927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.569955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.570069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.570179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.570292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.570405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.570556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.570703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.570861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.570894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.570984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.571009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.571121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.571149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.571273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.571300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.571397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.571425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 00:34:25.840 [2024-07-14 14:10:03.571545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.571573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.840 qpair failed and we were unable to recover it. 
00:34:25.840 [2024-07-14 14:10:03.571664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.840 [2024-07-14 14:10:03.571692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.571827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.571857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.571980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.572100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.572261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.572407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.572613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.572753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.572894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.572920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.573122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.573708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.573949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.573976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.574067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.574188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.574339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.574470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.574598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.574707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.574853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.574883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.574993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.575154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.575296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.575410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.575548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.575658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.575773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.575894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.575920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.576040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.576070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.576157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.576183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 
00:34:25.841 [2024-07-14 14:10:03.576304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.576331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.576447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.576472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.576553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.576579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.576666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.841 [2024-07-14 14:10:03.576691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.841 qpair failed and we were unable to recover it. 00:34:25.841 [2024-07-14 14:10:03.576811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.576837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.576940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.576966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.577519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.577931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.577956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.578075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.578191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.578331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.578438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.578559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.578678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.578836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.578874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.579010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.579155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.579269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.579404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.579556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.579733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.579848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.579889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.580044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.580072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.580190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.580240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.580396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.580451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.580631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.580685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.580777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.580805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.580921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.580960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.581059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.581085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.581213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.581241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.581359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.581387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.581519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.581546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.581661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.581706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 00:34:25.842 [2024-07-14 14:10:03.581814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.842 [2024-07-14 14:10:03.581839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.842 qpair failed and we were unable to recover it. 
00:34:25.842 [2024-07-14 14:10:03.581938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.842 [2024-07-14 14:10:03.581963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.842 qpair failed and we were unable to recover it.
00:34:25.842 [2024-07-14 14:10:03.582123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.842 [2024-07-14 14:10:03.582166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.842 qpair failed and we were unable to recover it.
00:34:25.842 [2024-07-14 14:10:03.582256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.842 [2024-07-14 14:10:03.582280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.842 qpair failed and we were unable to recover it.
00:34:25.842 [2024-07-14 14:10:03.582422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.842 [2024-07-14 14:10:03.582465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.842 qpair failed and we were unable to recover it.
00:34:25.842 [2024-07-14 14:10:03.582576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.842 [2024-07-14 14:10:03.582606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.842 qpair failed and we were unable to recover it.
00:34:25.842 [2024-07-14 14:10:03.582726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.582769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.582908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.582952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.583093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.583122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.583220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.583248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.583395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.583448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.583599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.583654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.583785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.583824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.583964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.583993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.584092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.584135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.584228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.584258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.584357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.584386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.584544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.584597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.584708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.584736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.584863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.584909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.585009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.585036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.585123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.585167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.585340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.585387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.585510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.585563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.585717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.585746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.585889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.585927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.586930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.586969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.587074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.587101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.587213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.587259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.587432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.587480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.587617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.587645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.587775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.587799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.587887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.587915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.588055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.588081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.588249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.588277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.843 qpair failed and we were unable to recover it.
00:34:25.843 [2024-07-14 14:10:03.588374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.843 [2024-07-14 14:10:03.588401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.588514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.588539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.588646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.588677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.588787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.588813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.588901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.588927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.589849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.589884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.590901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.590927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.591916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.591942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.592938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.592969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.593112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.593141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.593282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.593328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.593469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.593494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.593637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.593663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.593776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.593801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.593905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.593945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.594047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.844 [2024-07-14 14:10:03.594074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.844 qpair failed and we were unable to recover it.
00:34:25.844 [2024-07-14 14:10:03.594169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.594275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.594420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.594573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.594718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.594833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.594949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.594975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.595108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.595138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.595242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.595270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.595398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.595427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.595610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.595656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.595778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.595808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.595904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.595931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.596035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.845 [2024-07-14 14:10:03.596063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.845 qpair failed and we were unable to recover it.
00:34:25.845 [2024-07-14 14:10:03.596196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.596241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.596373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.596417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.596519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.596547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.596708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.596733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.596853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.596884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 
00:34:25.845 [2024-07-14 14:10:03.596979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.597114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.597258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.597370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.597509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 
00:34:25.845 [2024-07-14 14:10:03.597618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.597755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.597893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.597921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.598006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.598120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 
00:34:25.845 [2024-07-14 14:10:03.598257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.598366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.598529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.598666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.598791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 
00:34:25.845 [2024-07-14 14:10:03.598956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.598984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.599072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.599097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.599220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.599248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.599367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.599415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.599548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.599576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 
00:34:25.845 [2024-07-14 14:10:03.599698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.599725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.599868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.845 [2024-07-14 14:10:03.599906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.845 qpair failed and we were unable to recover it. 00:34:25.845 [2024-07-14 14:10:03.600038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.600208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.600346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.600515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.600675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.600801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.600932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.600958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.601050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.601192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.601326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.601483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.601621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.601780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.601939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.601978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.602107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.602133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.602239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.602267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.602445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.602488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.602620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.602665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.602783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.602807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.602904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.602950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.603104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.603152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.603280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.603308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.603434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.603461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.603590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.603618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.603780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.603808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.603915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.603943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.604074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.604236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.604400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.604514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.604636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.604776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.604894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.604923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.605068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.605210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.605341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.605498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.605655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 
00:34:25.846 [2024-07-14 14:10:03.605790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.605962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.605987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.846 [2024-07-14 14:10:03.606081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.846 [2024-07-14 14:10:03.606106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.846 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.606225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.606250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.606362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.606389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 
00:34:25.847 [2024-07-14 14:10:03.606519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.606561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.606690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.606721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.606824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.606853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.606972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.606998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.607117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.607143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 
00:34:25.847 [2024-07-14 14:10:03.607282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.607311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.607427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.607456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.607614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.607643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.607777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.607805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.607931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.607970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 
00:34:25.847 [2024-07-14 14:10:03.608087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.608113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.608277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.608321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.608422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.608466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.608655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.608708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 00:34:25.847 [2024-07-14 14:10:03.608820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.847 [2024-07-14 14:10:03.608844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.847 qpair failed and we were unable to recover it. 
00:34:25.847 [2024-07-14 14:10:03.608994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.609019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.609170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.609216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.609357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.609400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.609512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.609564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.609693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.609722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.609816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.609843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.609967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.610148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.610270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.610427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.610601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.610772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.610941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.610969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.611112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.611158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.611294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.611338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.611478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.611520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.611621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.611649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.611784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.611810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.847 [2024-07-14 14:10:03.611899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.847 [2024-07-14 14:10:03.611926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.847 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.612948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.612974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.613138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.613304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.613456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.613595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.613734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.613885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.613998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.614138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.614296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.614470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.614654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.614769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.614893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.614937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.615952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.615979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.616090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.616120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.616226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.616255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.616416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.616444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.616562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.616591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.616686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.616715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.616866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.616912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.617006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.617032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.617182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.617224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.617374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.617423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.617610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.617658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.617768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.617795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.617911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.617937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.618056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.848 [2024-07-14 14:10:03.618083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.848 qpair failed and we were unable to recover it.
00:34:25.848 [2024-07-14 14:10:03.618175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.618202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.618320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.618346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.618494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.618523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.618655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.618685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.618815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.618847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.619009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.619047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.619168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.619195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.619310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.619351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.619500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.619544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.619698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.619733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.619925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.619951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.620074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.620102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.620243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.620268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.620361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.620387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.620559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.620608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.620763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.620791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.620930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.620956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.621070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.621214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.621329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.621491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.621688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.621835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.621968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.622131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.622285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.622440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.622598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.622760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.622924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.622967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.623075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.623114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.623254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.623298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.623426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.623454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.623611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.623658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.623778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.623803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.623919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.623944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.624036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.624061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.624175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.624200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.624295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.624320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.624442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.849 [2024-07-14 14:10:03.624468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.849 qpair failed and we were unable to recover it.
00:34:25.849 [2024-07-14 14:10:03.624548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.624572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.624718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.624743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.624886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.624913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.625882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.625909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.626015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.626044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.626167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.626195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.626381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.626429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.626575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.626613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.626751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.626781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.626890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.850 [2024-07-14 14:10:03.626946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.850 qpair failed and we were unable to recover it.
00:34:25.850 [2024-07-14 14:10:03.627091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.627122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.627265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.627302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.627470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.627518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.627648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.627678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.627804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.627833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 
00:34:25.850 [2024-07-14 14:10:03.627984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.628155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.628281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.628438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.628619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 
00:34:25.850 [2024-07-14 14:10:03.628749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.628910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.628936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.629039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.629065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.629175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.629205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.629330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.629358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 
00:34:25.850 [2024-07-14 14:10:03.629483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.629513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.629674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.629705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.629844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.629869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.629991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.630016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.630106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.630131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 
00:34:25.850 [2024-07-14 14:10:03.630267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.630295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.630417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.630445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.630591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.630640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.630775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.850 [2024-07-14 14:10:03.630800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.850 qpair failed and we were unable to recover it. 00:34:25.850 [2024-07-14 14:10:03.630915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.630941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.631060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.631086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.631231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.631288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.631429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.631474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.631643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.631674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.631773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.631803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.631943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.631970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.632087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.632114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.632235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.632278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.632396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.632449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.632614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.632643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.632775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.632806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.632920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.632947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.633069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.633094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.633266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.633322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.633466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.633519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.633670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.633713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.633861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.633897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.634029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.634169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.634313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.634481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.634629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.634803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.634945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.634969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.635086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.635111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.635207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.635233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.635396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.635424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.635617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.635645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.635773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.635801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.635918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.635943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.636057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.636082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.636201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.636225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.636348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.636376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.636500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.636529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.636625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.636653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.636786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.636814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.636978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.637017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.637143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.637169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.637305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.637349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 00:34:25.851 [2024-07-14 14:10:03.637479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.851 [2024-07-14 14:10:03.637522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.851 qpair failed and we were unable to recover it. 
00:34:25.851 [2024-07-14 14:10:03.637677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.637721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.637861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.637899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.638045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.638072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.638263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.638318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.638433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.638486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 
00:34:25.852 [2024-07-14 14:10:03.638676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.638726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.638856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.638887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.638999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.639025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.639115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.639158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 00:34:25.852 [2024-07-14 14:10:03.639352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.852 [2024-07-14 14:10:03.639381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.852 qpair failed and we were unable to recover it. 
00:34:25.852 [2024-07-14 14:10:03.639537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.852 [2024-07-14 14:10:03.639586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.852 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, "qpair failed and we were unable to recover it." — repeats ~115 more times between [2024-07-14 14:10:03.639586] and [2024-07-14 14:10:03.657759] (log timestamps 00:34:25.852–00:34:25.855), cycling over tqpair=0x7fc428000b90, 0x7fc430000b90, 0x7fc438000b90, and 0x1ce0840, all with addr=10.0.0.2, port=4420 ...]
00:34:25.855 [2024-07-14 14:10:03.657845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.657870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.657994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.658038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.658154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.658183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.658336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.658361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.658504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.658530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 
00:34:25.855 [2024-07-14 14:10:03.658648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.658676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.658834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.658873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.658986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.659024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.659161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.659190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.659380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.659408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 
00:34:25.855 [2024-07-14 14:10:03.659567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.659619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.659716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.659745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.659923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.659948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.660095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.660120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.660260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.660310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 
00:34:25.855 [2024-07-14 14:10:03.660474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.660522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.660625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.660653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.660803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.660831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.660973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.660998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.661118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.661143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 
00:34:25.855 [2024-07-14 14:10:03.661306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.661334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.661428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.661456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.661578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.661611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.661712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.661739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.661861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.661895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 
00:34:25.855 [2024-07-14 14:10:03.662005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.662157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.662272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.662409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.662547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 
00:34:25.855 [2024-07-14 14:10:03.662727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.855 [2024-07-14 14:10:03.662929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.855 [2024-07-14 14:10:03.662968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.855 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.663074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.663113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.663291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.663338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.663498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.663527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.856 [2024-07-14 14:10:03.663675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.663721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.663850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.663881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.664000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.664025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.664146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.664188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.664336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.664382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.856 [2024-07-14 14:10:03.664568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.664616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.664745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.664777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.664918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.664945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.665088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.665114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.665286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.665331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.856 [2024-07-14 14:10:03.665522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.665566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.665680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.665709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.665840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.665869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.666017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.666187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.856 [2024-07-14 14:10:03.666324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.666503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.666681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.666826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.666969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.666995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.856 [2024-07-14 14:10:03.667107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.667132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.667264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.667292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.667445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.667473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.667606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.667634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.667764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.667792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.856 [2024-07-14 14:10:03.667940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.667980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.668086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.668124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.668252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.668278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.668400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.668425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 00:34:25.856 [2024-07-14 14:10:03.668548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.856 [2024-07-14 14:10:03.668575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.856 qpair failed and we were unable to recover it. 
00:34:25.857 [2024-07-14 14:10:03.668714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.668758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.668903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.668930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.669048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.669073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.669163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.669189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.669321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.669372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 
00:34:25.857 [2024-07-14 14:10:03.669524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.669552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.669675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.669703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.669835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.669863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.669983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.670008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 00:34:25.857 [2024-07-14 14:10:03.670132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.857 [2024-07-14 14:10:03.670175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.857 qpair failed and we were unable to recover it. 
00:34:25.857 [2024-07-14 14:10:03.670307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.670335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.670521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.670554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.670686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.670714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.670813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.670852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.671932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.671971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.672114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.672139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.672258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.672298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.672451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.672479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.672585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.672628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.672774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.672803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.672952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.672977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.673097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.673122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.673201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.673225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.673334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.673363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.673477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.673501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.673655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.673683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.673825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.673868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.857 qpair failed and we were unable to recover it.
00:34:25.857 [2024-07-14 14:10:03.674902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.857 [2024-07-14 14:10:03.674931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.675945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.675984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.676106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.676134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.676236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.676265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.676418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.676471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.676613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.676663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.676777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.676807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.676926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.676952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.677820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.677864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.678043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.678213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.678419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.678539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.678669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.678870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.678990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.679121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.679321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.679468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.679645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.679807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.679913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.679938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.680051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.680168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.680342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.680480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.680629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.858 [2024-07-14 14:10:03.680808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.858 qpair failed and we were unable to recover it.
00:34:25.858 [2024-07-14 14:10:03.680932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.680961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.681085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.681111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.681226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.681253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.681397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.681426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.681551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.681580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.681707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.681736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.681888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.681914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.682933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.682959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.683100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.683233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.683359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.683526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.683663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.683841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.683980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.684123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.684247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.684393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.684548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.684742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.684867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.684908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.685917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.685957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.686055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.686082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.686196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.686240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.686380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.686409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.686529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.686558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.686676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.686702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.859 [2024-07-14 14:10:03.686824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.859 [2024-07-14 14:10:03.686849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.859 qpair failed and we were unable to recover it.
00:34:25.860 [2024-07-14 14:10:03.686946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.860 [2024-07-14 14:10:03.686972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.860 qpair failed and we were unable to recover it.
00:34:25.860 [2024-07-14 14:10:03.687062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.860 [2024-07-14 14:10:03.687088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.860 qpair failed and we were unable to recover it.
00:34:25.860 [2024-07-14 14:10:03.687180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.860 [2024-07-14 14:10:03.687220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.860 qpair failed and we were unable to recover it.
00:34:25.860 [2024-07-14 14:10:03.687347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.860 [2024-07-14 14:10:03.687375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.860 qpair failed and we were unable to recover it.
00:34:25.860 [2024-07-14 14:10:03.687468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.860 [2024-07-14 14:10:03.687496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.860 qpair failed and we were unable to recover it.
00:34:25.860 [2024-07-14 14:10:03.687596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.687627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.687769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.687808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.687929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.687957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.688051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.688208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 
00:34:25.860 [2024-07-14 14:10:03.688340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.688462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.688616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.688743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.688858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.688892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 
00:34:25.860 [2024-07-14 14:10:03.689013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.689143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.689334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.689490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.689607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 
00:34:25.860 [2024-07-14 14:10:03.689741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.689858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.689894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.690008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.690215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.690346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 
00:34:25.860 [2024-07-14 14:10:03.690537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.690677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.690792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.690951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.690979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.691074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.691102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 
00:34:25.860 [2024-07-14 14:10:03.691237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.691264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.691355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.691397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.691517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.691545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.691670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.860 [2024-07-14 14:10:03.691702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.860 qpair failed and we were unable to recover it. 00:34:25.860 [2024-07-14 14:10:03.691803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.691833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.691982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.692120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.692304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.692458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.692643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.692762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.692874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.692910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.693371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.693893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.693921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.694011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.694122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.694239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.694353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.694504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.694616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.694750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.694893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.694935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.695023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.695163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.695316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.695503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.695636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.695776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.695892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.695917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.696041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.696179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.696348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.696460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.696603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.696718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.696855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.696970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.696996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.697088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.697113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 00:34:25.861 [2024-07-14 14:10:03.697194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.861 [2024-07-14 14:10:03.697219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.861 qpair failed and we were unable to recover it. 
00:34:25.861 [2024-07-14 14:10:03.697331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.697355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.697441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.697466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.697543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.697568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.697659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.697683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.697786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.697830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.697950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.697978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.698109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.698280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.698396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.698518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.698636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.698792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.698967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.698995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.699129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.699171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.699313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.699339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.699489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.699520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.699654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.699683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.699781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.699810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.699980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.700019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.700172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.700201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.700394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.700422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.700517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.700545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.700650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.700678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.700809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.700836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.700996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.701134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.701294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.701475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.701660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.701813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.701933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.701959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.702089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.702121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.702223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.702256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.702351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.702379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.702541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.702590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.862 [2024-07-14 14:10:03.702684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.702711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.702813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.702840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.702973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.703000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.703085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.703109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 00:34:25.862 [2024-07-14 14:10:03.703240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.862 [2024-07-14 14:10:03.703265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.862 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.703371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.703400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.703497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.703524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.703641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.703670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.703777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.703802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.703908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.703951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.704088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.704114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.704269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.704295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.704473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.704500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.704621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.704663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.704761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.704788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.704972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.705137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.705264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.705475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.705643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.705792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.705915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.705941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.706032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.706193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.706359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.706535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.706689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.706807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.706964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.706990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.707074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.707099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.707260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.707287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.707380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.707407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.707509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.707534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.707680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.707707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.707833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.707860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.708005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.708148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.708276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.708415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.708586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 
00:34:25.863 [2024-07-14 14:10:03.708801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.708945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.708970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.709053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.709077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.863 [2024-07-14 14:10:03.709160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.863 [2024-07-14 14:10:03.709202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.863 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.709318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.709342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.709445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.709473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.709602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.709629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.709732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.709760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.709862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.709893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.709985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.710129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.710237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.710419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.710536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.710728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.710853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.710887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.710994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.711135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.711314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.711470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.711647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.711765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.711891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.711934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.712046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.712169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.712280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.712429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.712544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.712698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.712871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.712904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.713006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.713031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.713124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.713150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.713285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.713313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.713403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.713432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 00:34:25.864 [2024-07-14 14:10:03.713533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.864 [2024-07-14 14:10:03.713561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.864 qpair failed and we were unable to recover it. 
00:34:25.864 [2024-07-14 14:10:03.713663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.713691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.713803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.713828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.713943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.713982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.714114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.714262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.714422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.714581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.714738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.714893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.864 [2024-07-14 14:10:03.714997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.864 [2024-07-14 14:10:03.715022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.864 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.715108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.715133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.715287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.715320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.715433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.715462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.715635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.715664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.715761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.715790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.715886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.715940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.716861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.716895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.717904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.717948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.865 [2024-07-14 14:10:03.718920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.865 [2024-07-14 14:10:03.718945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.865 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.719900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.719957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.720139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.720280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.720414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.720605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.720752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.720899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.720980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.721947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.721972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.722971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.722996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.723119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.723271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.723414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.723521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.723658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.866 [2024-07-14 14:10:03.723766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.866 qpair failed and we were unable to recover it.
00:34:25.866 [2024-07-14 14:10:03.723894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.723920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.724922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.724948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.725906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.725997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.726022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.726117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.726142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.726260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.726286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.726370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.726395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.726480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.867 [2024-07-14 14:10:03.726505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.867 qpair failed and we were unable to recover it.
00:34:25.867 [2024-07-14 14:10:03.726589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.726614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.726733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.726760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.726875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.726906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.726989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.727103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 
00:34:25.867 [2024-07-14 14:10:03.727228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.727344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.727488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.727639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.727751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 
00:34:25.867 [2024-07-14 14:10:03.727896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.727934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 
00:34:25.867 [2024-07-14 14:10:03.728559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.728849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.728965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.729004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.729136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.729164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 
00:34:25.867 [2024-07-14 14:10:03.729261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.729287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.729410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.729437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.867 [2024-07-14 14:10:03.729528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.867 [2024-07-14 14:10:03.729554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.867 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.729699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.729725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.729821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.729847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.729943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.729970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.730062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.730088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.730223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.730252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.730352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.730382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.730578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.730608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.730760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.730789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.730929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.730956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.731063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.731238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.731394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.731554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.731680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.731820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.731941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.731968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.732072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.732098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.732258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.732287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.732385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.732414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.732577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.732605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.732695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.732724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.732822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.732859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.732978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.733119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.733312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.733432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.733562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.733746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.733893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.733920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.734042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.734159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.734277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.734479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.734664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.734825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.734946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.734972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.735117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.735144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.735283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.735312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.735423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.735448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.735555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.735584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.735688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.735717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.735839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.735868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.735990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.736015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.736137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.736163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.736251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.736277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.736385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.736414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 00:34:25.868 [2024-07-14 14:10:03.736537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.868 [2024-07-14 14:10:03.736568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.868 qpair failed and we were unable to recover it. 
00:34:25.868 [2024-07-14 14:10:03.736669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.736697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.736811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.736840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.736955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.736982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.737060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.737175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 
00:34:25.869 [2024-07-14 14:10:03.737338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.737466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.737606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.737768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.737932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.737958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 
00:34:25.869 [2024-07-14 14:10:03.738074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.738100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.738201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.738240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.738378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.738422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.738564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.738608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.738730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.738757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 
00:34:25.869 [2024-07-14 14:10:03.738901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.738927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.739035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.739078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.739209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.739252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.739356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.739383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 00:34:25.869 [2024-07-14 14:10:03.739480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.869 [2024-07-14 14:10:03.739506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.869 qpair failed and we were unable to recover it. 
00:34:25.869 [2024-07-14 14:10:03.741630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.869 [2024-07-14 14:10:03.741658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.869 qpair failed and we were unable to recover it.
00:34:25.869 [2024-07-14 14:10:03.741774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.869 [2024-07-14 14:10:03.741800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.869 qpair failed and we were unable to recover it.
00:34:25.869 [2024-07-14 14:10:03.741886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.869 [2024-07-14 14:10:03.741913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.869 qpair failed and we were unable to recover it.
00:34:25.869 [2024-07-14 14:10:03.742010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.869 [2024-07-14 14:10:03.742037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.869 qpair failed and we were unable to recover it.
00:34:25.869 [2024-07-14 14:10:03.742154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.869 [2024-07-14 14:10:03.742180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:25.869 qpair failed and we were unable to recover it.
00:34:25.871 [2024-07-14 14:10:03.754917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.871 [2024-07-14 14:10:03.754943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.871 qpair failed and we were unable to recover it. 00:34:25.871 [2024-07-14 14:10:03.755033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.871 [2024-07-14 14:10:03.755061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.871 qpair failed and we were unable to recover it. 00:34:25.871 [2024-07-14 14:10:03.755155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.871 [2024-07-14 14:10:03.755179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.871 qpair failed and we were unable to recover it. 00:34:25.871 [2024-07-14 14:10:03.755295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.871 [2024-07-14 14:10:03.755321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.871 qpair failed and we were unable to recover it. 00:34:25.871 [2024-07-14 14:10:03.755436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.871 [2024-07-14 14:10:03.755461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.871 qpair failed and we were unable to recover it. 
00:34:25.871 [2024-07-14 14:10:03.755576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.871 [2024-07-14 14:10:03.755602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.871 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.755743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.755769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.755887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.755913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.756003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.756117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.756260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.756427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.756544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.756692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.756821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.756851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.757002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.757161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.757333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.757479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.757599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.757709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.757851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.757883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.757995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.758127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.758242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.758378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.758516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.758626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.758747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.758867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.758910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.759004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.759145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.759289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.759430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.759599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.759738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.759855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.759890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.760054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.760199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.760383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.760528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.760647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.760791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.760935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.760961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.761044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.761162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.761334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.761478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.761646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.761754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 
00:34:25.872 [2024-07-14 14:10:03.761891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.761918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.762009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.762034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.762144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.872 [2024-07-14 14:10:03.762170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.872 qpair failed and we were unable to recover it. 00:34:25.872 [2024-07-14 14:10:03.762290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.762316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.762401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.762426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 
00:34:25.873 [2024-07-14 14:10:03.762541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.762567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.762680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.762705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.762795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.762821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.762957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.762983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.763141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.763185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 
00:34:25.873 [2024-07-14 14:10:03.763305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.763330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.763474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.763500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.763614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.763639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.763755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.763780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.763917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.763946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 
00:34:25.873 [2024-07-14 14:10:03.764080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.764254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.764390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.764504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.764612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 
00:34:25.873 [2024-07-14 14:10:03.764753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.764869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.764901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.765020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.765047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.765139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.765164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 00:34:25.873 [2024-07-14 14:10:03.765280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.873 [2024-07-14 14:10:03.765305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:25.873 qpair failed and we were unable to recover it. 
00:34:25.873 [2024-07-14 14:10:03.765421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.765447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.765531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.765557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.765643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.765669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.765778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.765803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.765921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.765947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.766867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.766998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.767855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.873 [2024-07-14 14:10:03.767983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.873 [2024-07-14 14:10:03.768021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.873 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.768892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.768988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.769972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.769998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.770120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.770146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.770274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.770319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.770451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.770492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.770630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.770655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.770767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.770793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.770902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.770929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.771942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.771969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.772888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.772913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.773056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.773233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.773379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.773529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.773688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.773885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.773978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.774005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.774108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.774152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.774272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.774298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.774419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.774444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.774584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.774610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.774730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.874 [2024-07-14 14:10:03.774756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:25.874 qpair failed and we were unable to recover it.
00:34:25.874 [2024-07-14 14:10:03.774883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.774909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:25.875 [2024-07-14 14:10:03.775031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.775059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:25.875 [2024-07-14 14:10:03.775160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.775189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:25.875 [2024-07-14 14:10:03.775321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.775349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:25.875 [2024-07-14 14:10:03.775474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.775502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:25.875 [2024-07-14 14:10:03.775635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.775660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:25.875 [2024-07-14 14:10:03.775746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.875 [2024-07-14 14:10:03.775775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:25.875 qpair failed and we were unable to recover it.
00:34:26.152 [2024-07-14 14:10:03.775891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.152 [2024-07-14 14:10:03.775917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.152 qpair failed and we were unable to recover it.
00:34:26.152 [2024-07-14 14:10:03.776028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.152 [2024-07-14 14:10:03.776056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.152 qpair failed and we were unable to recover it.
00:34:26.152 [2024-07-14 14:10:03.776181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.152 [2024-07-14 14:10:03.776209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.152 qpair failed and we were unable to recover it.
00:34:26.152 [2024-07-14 14:10:03.776343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.776371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.776504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.776531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.776687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.776718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.776853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.776906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.777882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.777909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.778867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.778902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.779890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.779917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.780920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.780947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.781087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.781204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.781345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.781485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.781622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.153 [2024-07-14 14:10:03.781771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.153 qpair failed and we were unable to recover it.
00:34:26.153 [2024-07-14 14:10:03.781861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.153 [2024-07-14 14:10:03.781894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.153 qpair failed and we were unable to recover it. 00:34:26.153 [2024-07-14 14:10:03.782013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.153 [2024-07-14 14:10:03.782038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.153 qpair failed and we were unable to recover it. 00:34:26.153 [2024-07-14 14:10:03.782124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.153 [2024-07-14 14:10:03.782149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.782293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.782318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.782427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.782452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.782560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.782585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.782660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.782685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.782799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.782824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.782904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.782930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.783009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.783146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.783297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.783420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.783580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.783731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.783863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.783896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.784541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.784904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.784947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.785076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.785101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.785228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.785270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.785411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.785440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.785611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.785636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.785753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.785778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.785862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.785893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.786011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.786116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.786265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.786460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.786575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.786745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.786858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.786889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.786988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.787013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.787101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.787126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.787269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.787301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 
00:34:26.154 [2024-07-14 14:10:03.787411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.787451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.787595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.787636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.787762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.154 [2024-07-14 14:10:03.787789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.154 qpair failed and we were unable to recover it. 00:34:26.154 [2024-07-14 14:10:03.787875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.787908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.788044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.788069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.788199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.788226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.788323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.788351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.788507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.788534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.788657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.788685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.788825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.788850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.788990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.789130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.789305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.789414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.789575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.789725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.789893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.789919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.790047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.790071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.790213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.790241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.790369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.790397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.790521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.790546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.790672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.790697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.790836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.790863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.791011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.791149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.791285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.791468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.791619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.791778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.791894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.791920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.792029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.792056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.792181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.792209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.792313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.792338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.792420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.792446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 00:34:26.155 [2024-07-14 14:10:03.792583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.792611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
00:34:26.155 [2024-07-14 14:10:03.792707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.155 [2024-07-14 14:10:03.792734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.155 qpair failed and we were unable to recover it. 
[... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x1ce0840 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeated through 2024-07-14 14:10:03.809433 ...]
00:34:26.158 [2024-07-14 14:10:03.809548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.809576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.809707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.809735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.809869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.809899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.810011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.810036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.810200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.810228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 
00:34:26.158 [2024-07-14 14:10:03.810377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.810405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.810562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.810587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.810707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.158 [2024-07-14 14:10:03.810732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.158 qpair failed and we were unable to recover it. 00:34:26.158 [2024-07-14 14:10:03.810881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.810907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.811021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.811193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.811302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.811477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.811604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.811764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.811904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.811929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.812094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.812121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.812222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.812249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.812394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.812419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.812539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.812564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.812682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.812707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.812793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.812835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.813001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.813149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.813312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.813493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.813636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.813774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.813938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.813967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.814078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.814208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.814315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.814459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.814600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.814768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.814883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.814908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.815036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.815063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.815194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.815222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.815361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.815387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.815469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.815500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.815631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.815659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.815810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.815837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.815985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.816116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.816278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 
00:34:26.159 [2024-07-14 14:10:03.816434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.816586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.816723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.816934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.159 [2024-07-14 14:10:03.816959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.159 qpair failed and we were unable to recover it. 00:34:26.159 [2024-07-14 14:10:03.817053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 
00:34:26.160 [2024-07-14 14:10:03.817201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.817309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.817439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.817562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.817738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 
00:34:26.160 [2024-07-14 14:10:03.817891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.817933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.818061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.818086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.818213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.818240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.818378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.818404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.818523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.818548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 
00:34:26.160 [2024-07-14 14:10:03.818725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.818750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.818859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.818889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 
00:34:26.160 [2024-07-14 14:10:03.819439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.819884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.819988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.820017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 
00:34:26.160 [2024-07-14 14:10:03.820132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.820156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.820240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.820265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.820373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.820414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.820542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.820570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 00:34:26.160 [2024-07-14 14:10:03.820700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.160 [2024-07-14 14:10:03.820725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.160 qpair failed and we were unable to recover it. 
00:34:26.160 [2024-07-14 14:10:03.820846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.160 [2024-07-14 14:10:03.820871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.160 qpair failed and we were unable to recover it.
[the same three-line failure (connect() errno = 111, tqpair=0x1ce0840, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats for every subsequent connection attempt through 14:10:03.837609]
00:34:26.163 [2024-07-14 14:10:03.837742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.837784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.837889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.837929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 
00:34:26.163 [2024-07-14 14:10:03.838410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.838871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.838991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.839016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 
00:34:26.163 [2024-07-14 14:10:03.839118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.839146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.839289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.839314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.839431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.839457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.839599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.839640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.839769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.839794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 
00:34:26.163 [2024-07-14 14:10:03.839986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.840011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.163 [2024-07-14 14:10:03.840157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.163 [2024-07-14 14:10:03.840182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.163 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.840325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.840349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.840456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.840483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.840577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.840605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.840737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.840761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.840846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.840871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.841016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.841169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.841329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.841430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.841579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.841751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.841922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.841948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.842147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.842174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.842326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.842353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.842453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.842480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.842591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.842616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.842761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.842786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.842931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.842959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.843082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.843218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.843331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.843485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.843603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.843732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.843874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.843906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.844038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.844065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.844262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.844290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.844402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.844428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.844540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.844565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.844692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.844719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.844893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.844918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.845025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.845252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.845363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.845539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.845667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.845827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.845968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.845994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 
00:34:26.164 [2024-07-14 14:10:03.846079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.846105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.846248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.846273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.846463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.164 [2024-07-14 14:10:03.846490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.164 qpair failed and we were unable to recover it. 00:34:26.164 [2024-07-14 14:10:03.846650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.846677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.846811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.846839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 
00:34:26.165 [2024-07-14 14:10:03.846952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.846978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.847071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.847189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.847328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.847429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 
00:34:26.165 [2024-07-14 14:10:03.847561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.847716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.847838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.847866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.848004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.848219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 
00:34:26.165 [2024-07-14 14:10:03.848388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.848544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.848675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.848785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 00:34:26.165 [2024-07-14 14:10:03.848932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.165 [2024-07-14 14:10:03.848971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.165 qpair failed and we were unable to recover it. 
00:34:26.165 [2024-07-14 14:10:03.849075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.165 [2024-07-14 14:10:03.849103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.165 qpair failed and we were unable to recover it.
[... the three messages above repeat verbatim, with only the timestamps advancing, for every reconnect attempt from 14:10:03.849 through 14:10:03.866; each attempt against 10.0.0.2:4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:34:26.168 [2024-07-14 14:10:03.866303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.168 [2024-07-14 14:10:03.866344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.168 qpair failed and we were unable to recover it.
00:34:26.168 [2024-07-14 14:10:03.866465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.866493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.866658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.866683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.866828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.866853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.866943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.866968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.867060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 
00:34:26.168 [2024-07-14 14:10:03.867181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.867371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.867506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.867635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.867751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 
00:34:26.168 [2024-07-14 14:10:03.867905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.867930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.868022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.868163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.868341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.868497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 
00:34:26.168 [2024-07-14 14:10:03.868664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.868801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.868919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.868948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.869090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.869114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.869309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.869337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 
00:34:26.168 [2024-07-14 14:10:03.869432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.869460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.869562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.869590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.869719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.869762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.168 [2024-07-14 14:10:03.869864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.168 [2024-07-14 14:10:03.869898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.168 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.870029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.870189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.870349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.870495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.870656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.870776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.870916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.870941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.871020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.871146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.871289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.871425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.871563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.871799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.871954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.871982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.872120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.872145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.872236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.872261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.872409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.872434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.872569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.872597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.872728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.872753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.872869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.872899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.873033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.873159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.873319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.873460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.873592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.873746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.873911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.873937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.874021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.874208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.874362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.874515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.874644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.874811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.874939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.874967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.875098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.875123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.875237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.875263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.169 [2024-07-14 14:10:03.875368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.875396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.875518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.875546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.875742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.875767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.875920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.875949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 00:34:26.169 [2024-07-14 14:10:03.876101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.169 [2024-07-14 14:10:03.876129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.169 qpair failed and we were unable to recover it. 
00:34:26.170 [2024-07-14 14:10:03.876250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.876278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.876394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.876419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.876537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.876562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.876647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.876672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.876865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.876899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 
00:34:26.170 [2024-07-14 14:10:03.877031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.877056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.877175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.877200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.877310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.877335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.877465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.877493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 00:34:26.170 [2024-07-14 14:10:03.877647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.170 [2024-07-14 14:10:03.877671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.170 qpair failed and we were unable to recover it. 
00:34:26.170 [2024-07-14 14:10:03.877787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.170 [2024-07-14 14:10:03.877812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.170 qpair failed and we were unable to recover it.
00:34:26.173 [... identical posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x1ce0840, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeated from 2024-07-14 14:10:03.877985 through 14:10:03.894840 ...]
00:34:26.173 [2024-07-14 14:10:03.894978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.895110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.895301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.895412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.895595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 
00:34:26.173 [2024-07-14 14:10:03.895728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.895869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.895900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 
00:34:26.173 [2024-07-14 14:10:03.896402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.896859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.896995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 
00:34:26.173 [2024-07-14 14:10:03.897149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.897284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.897421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.897519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.897644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 
00:34:26.173 [2024-07-14 14:10:03.897794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.897931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.897956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.898080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.898108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.898271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.898299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.898431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.898455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 
00:34:26.173 [2024-07-14 14:10:03.898563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.898592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.898721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.898749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.898880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.898909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.899015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.899040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.899115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.899140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 
00:34:26.173 [2024-07-14 14:10:03.899273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.899300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.899429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.899456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.899588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.899613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.173 [2024-07-14 14:10:03.899703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.173 [2024-07-14 14:10:03.899746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.173 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.899835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.899862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.899977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.900081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.900189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.900365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.900495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.900683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.900817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.900959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.900987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.901093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.901121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.901252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.901277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.901385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.901410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.901507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.901535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.901731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.901759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.901926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.901952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.902040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.902198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.902353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.902512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.902661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.902828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.902967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.902994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.903131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.903156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.903269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.903293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.903432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.903460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.903557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.903584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.903689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.903713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.903830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.903855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.904026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.904178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.904367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 
00:34:26.174 [2024-07-14 14:10:03.904510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.904656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.904821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.904973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.904999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.174 qpair failed and we were unable to recover it. 00:34:26.174 [2024-07-14 14:10:03.905114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.174 [2024-07-14 14:10:03.905138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.905261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.905289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.905381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.905408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.905511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.905535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.905619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.905644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.905780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.905808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.905960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.905989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.906131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.906156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.906244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.906268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.906356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.906381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.906544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.906571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.906715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.906739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.906848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.906873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.907017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.907177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.907340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.907484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.907625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.907775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.907904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.907929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.908049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.908218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.908351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.908486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.908625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.908784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.908933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.908958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.909035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.909169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.909316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.909486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.909648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.909794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.909911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.909937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.910065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.910093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.910264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.910289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.910404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.910447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.910544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.910572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.910679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.910706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.910853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.910883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 00:34:26.175 [2024-07-14 14:10:03.911004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.911029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.175 qpair failed and we were unable to recover it. 
00:34:26.175 [2024-07-14 14:10:03.911116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.175 [2024-07-14 14:10:03.911140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.911246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.911271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.911392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.911417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.911511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.911535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.911652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.911677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.911789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.911814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.911934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.911961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.912438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.912903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.912928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.913020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.913174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.913323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.913511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.913654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.913812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.913934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.913961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.914071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.914096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.914211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.914236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.914435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.914462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.914568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.914595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.914747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.914774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.914872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.914906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.915010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.915127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.915272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.915381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.915493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.915729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.915873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.915904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.916020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.916185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.916307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.916451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.916589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 
00:34:26.176 [2024-07-14 14:10:03.916726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.176 [2024-07-14 14:10:03.916849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.176 [2024-07-14 14:10:03.916883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.176 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.917082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.917107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.917268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.917296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.917419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.917446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 
00:34:26.177 [2024-07-14 14:10:03.917605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.917633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.917741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.917766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.917853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.917883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.917969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.918098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 
00:34:26.177 [2024-07-14 14:10:03.918234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.918355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.918487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.918615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.918783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 
00:34:26.177 [2024-07-14 14:10:03.918895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.918920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.919029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.919176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.919311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.919429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 
00:34:26.177 [2024-07-14 14:10:03.919616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.919756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.919900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.919925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.920008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.920032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 00:34:26.177 [2024-07-14 14:10:03.920118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.177 [2024-07-14 14:10:03.920142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.177 qpair failed and we were unable to recover it. 
00:34:26.179 [2024-07-14 14:10:03.930899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cee390 (9): Bad file descriptor 00:34:26.179 [2024-07-14 14:10:03.931067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.931111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.931263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.931290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.931407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.931433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.931571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.931600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.931757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.931786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 
00:34:26.179 [2024-07-14 14:10:03.931938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.931971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.932070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.932189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.932331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.932454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 
00:34:26.179 [2024-07-14 14:10:03.932624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.932729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.932861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.932924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.933041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.933067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.933152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.933177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 
00:34:26.179 [2024-07-14 14:10:03.933307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.933334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.933440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.933465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.933545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.933569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.933705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.179 [2024-07-14 14:10:03.933737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.179 qpair failed and we were unable to recover it. 00:34:26.179 [2024-07-14 14:10:03.933856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.180 [2024-07-14 14:10:03.933899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.180 qpair failed and we were unable to recover it. 
00:34:26.180 [2024-07-14 14:10:03.933997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.180 [2024-07-14 14:10:03.934023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.180 qpair failed and we were unable to recover it. 00:34:26.180 [2024-07-14 14:10:03.934181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.180 [2024-07-14 14:10:03.934210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.180 qpair failed and we were unable to recover it. 00:34:26.180 [2024-07-14 14:10:03.934319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.180 [2024-07-14 14:10:03.934345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.180 qpair failed and we were unable to recover it. 00:34:26.180 [2024-07-14 14:10:03.934432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.180 [2024-07-14 14:10:03.934458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.180 qpair failed and we were unable to recover it. 00:34:26.180 [2024-07-14 14:10:03.934596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.180 [2024-07-14 14:10:03.934624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.180 qpair failed and we were unable to recover it. 
00:34:26.180 [2024-07-14 14:10:03.934788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.934813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.934908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.934932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.935112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.935259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.935370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.935508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.935667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.935812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.935972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.936951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.936976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.937940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.937967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.938855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.938904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.939035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.939060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.939174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.939199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.939309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.939337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.939438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.939463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.939573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.180 [2024-07-14 14:10:03.939602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.180 qpair failed and we were unable to recover it.
00:34:26.180 [2024-07-14 14:10:03.939706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.939733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.939843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.939868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.939995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.940958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.940985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.941102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.941128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.941244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.941269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.941384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.941410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.941501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.941528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.941665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.941704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.941855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.941906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.942900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.942984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.943799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.943950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.944129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.944260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.944418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.944585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.944725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.944905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.944934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.945052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.945078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.945166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.945190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.945280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.945305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.945434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.181 [2024-07-14 14:10:03.945459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.181 qpair failed and we were unable to recover it.
00:34:26.181 [2024-07-14 14:10:03.945542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.945567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.945706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.945735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.945851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.945893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.945989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.946920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.946948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.947971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.947996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.948851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.948895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.949865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.949981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.950139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.950281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.950424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.950585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.950784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.950915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.950961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.951045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.182 [2024-07-14 14:10:03.951070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.182 qpair failed and we were unable to recover it.
00:34:26.182 [2024-07-14 14:10:03.951223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.951248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.951330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.951371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.951492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.951520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.951659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.951684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.951805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.951830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.951951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.951983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.952099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.952124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.952219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.952244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.952339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.952366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.952485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.952510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.952624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.952649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.952818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.952846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.953889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.953916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.954037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.954063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.954283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.954310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.954441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.954466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.954556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.954581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.954712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.954741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.954938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.954964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.955904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.955930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.956037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.956065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.956192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.956217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.183 [2024-07-14 14:10:03.956328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.183 [2024-07-14 14:10:03.956353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.183 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.956463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.956490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.956654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.956679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.956762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.956786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.956893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.956921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.957895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.957922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.958862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.958973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.959918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.959943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.960932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.960958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.961094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.961121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.961232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.961257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.961372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.961397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.961507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.961535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.961649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.184 [2024-07-14 14:10:03.961673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.184 qpair failed and we were unable to recover it.
00:34:26.184 [2024-07-14 14:10:03.961762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.184 [2024-07-14 14:10:03.961786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.184 qpair failed and we were unable to recover it. 00:34:26.184 [2024-07-14 14:10:03.961905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.184 [2024-07-14 14:10:03.961934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.184 qpair failed and we were unable to recover it. 00:34:26.184 [2024-07-14 14:10:03.962050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.184 [2024-07-14 14:10:03.962075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.184 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.962166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.962191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.962280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.962305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.962457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.962482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.962573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.962602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.962687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.962712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.962843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.962871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.963018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.963119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.963240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.963354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.963498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.963641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.963751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.963888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.963931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.964079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.964104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.964218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.964243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.964399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.964424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.964543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.964568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.964672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.964712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.964862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.964899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.965009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.965125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.965243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.965428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.965542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.965773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.965952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.965978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.966068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.966181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.966287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.966393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.966558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.966705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.966816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.966955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.966982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.967074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.967100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.967244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.967270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 
00:34:26.185 [2024-07-14 14:10:03.967391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.967421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.967536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.967563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.967655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.967681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.185 qpair failed and we were unable to recover it. 00:34:26.185 [2024-07-14 14:10:03.967812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.185 [2024-07-14 14:10:03.967853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.967946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.967972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.968054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.968165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.968325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.968432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.968544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.968668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.968824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.968855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.968999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.969145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.969258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.969394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.969547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.969675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.969808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.969964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.969991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.970105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.970261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.970404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.970517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.970682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.970824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.970966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.970992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.971083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.971254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.971393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.971532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.971675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.971814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.971953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.971982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.972122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.972148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.972236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.972263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.972350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.972376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.972504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.972530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.972647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.972692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.972825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.972854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 
00:34:26.186 [2024-07-14 14:10:03.973017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.973044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.973134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.973160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.973262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.973291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.973427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.186 [2024-07-14 14:10:03.973452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.186 qpair failed and we were unable to recover it. 00:34:26.186 [2024-07-14 14:10:03.973538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.187 [2024-07-14 14:10:03.973564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.187 qpair failed and we were unable to recover it. 
00:34:26.187 [2024-07-14 14:10:03.973654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.187 [2024-07-14 14:10:03.973698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.187 qpair failed and we were unable to recover it. 00:34:26.187 [2024-07-14 14:10:03.973814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.187 [2024-07-14 14:10:03.973841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.187 qpair failed and we were unable to recover it. 00:34:26.187 [2024-07-14 14:10:03.973973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.187 [2024-07-14 14:10:03.974000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.187 qpair failed and we were unable to recover it. 00:34:26.187 [2024-07-14 14:10:03.974112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.187 [2024-07-14 14:10:03.974157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.187 qpair failed and we were unable to recover it. 00:34:26.187 [2024-07-14 14:10:03.974281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.187 [2024-07-14 14:10:03.974308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.187 qpair failed and we were unable to recover it. 
00:34:26.189 [2024-07-14 14:10:03.991119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.189 [2024-07-14 14:10:03.991144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.991286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.991325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.991470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.991502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.991638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.991667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.991760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.991786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.991923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.991953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.992070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.992183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.992316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.992456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.992573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.992721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.992891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.992917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.993033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.993168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.993314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.993437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.993603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.993768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.993913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.993941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.994094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.994125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.994242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.994268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.994381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.994406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.994588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.994614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.994748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.994777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.994893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.994922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.995049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.995164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.995280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.995422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.995564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.995733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.995908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.995937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.996047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.996196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.996373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.996535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.996651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.996804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.996963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.996991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 
00:34:26.190 [2024-07-14 14:10:03.997135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.190 [2024-07-14 14:10:03.997160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.190 qpair failed and we were unable to recover it. 00:34:26.190 [2024-07-14 14:10:03.997356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.997382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.997465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.997490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.997607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.997633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.997741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.997769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:03.997907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.997933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.998046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.998220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.998395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.998503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:03.998609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.998722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.998857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.998896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.999032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.999201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:03.999371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.999541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.999684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.999795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:03.999935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:03.999978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:04.000087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.000235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.000414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.000550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.000658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:04.000813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.000961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.000986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.001099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.001270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.001400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:04.001503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.001630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.001770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.001961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.001987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 00:34:26.191 [2024-07-14 14:10:04.002073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.191 [2024-07-14 14:10:04.002101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.191 qpair failed and we were unable to recover it. 
00:34:26.191 [2024-07-14 14:10:04.002191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.191 [2024-07-14 14:10:04.002217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.191 qpair failed and we were unable to recover it.
[... the same three-line triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 14:10:04.002309 through 14:10:04.018112, alternating between tqpair=0x7fc428000b90, tqpair=0x7fc438000b90, and tqpair=0x1ce0840, all targeting addr=10.0.0.2, port=4420 ...]
00:34:26.194 [2024-07-14 14:10:04.018229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.018254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-14 14:10:04.018373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.018400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-14 14:10:04.018503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.018528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-14 14:10:04.018740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.018768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-14 14:10:04.018904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.018947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 
00:34:26.194 [2024-07-14 14:10:04.019036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.019065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-14 14:10:04.019147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.019172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.194 [2024-07-14 14:10:04.019270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.194 [2024-07-14 14:10:04.019298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.194 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.019431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.019456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.019543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.019568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.019674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.019701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.019842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.019867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.020066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.020242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.020403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.020505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.020657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.020830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.020963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.020988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.021109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.021243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.021383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.021505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.021647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.021759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.021869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.021900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.021993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.022147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.022310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.022460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.022579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.022722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.022833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.022963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.022989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.023134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.023161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.023297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.023321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.023404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.023429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.023526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.023554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.023780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.023807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.023900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.023942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.024030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.024176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.024287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.024474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.024619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 
00:34:26.195 [2024-07-14 14:10:04.024728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.024866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.195 [2024-07-14 14:10:04.024914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.195 qpair failed and we were unable to recover it. 00:34:26.195 [2024-07-14 14:10:04.025032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.025147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.025282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-14 14:10:04.025387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.025493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.025636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.025763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.025950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.025979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-14 14:10:04.026101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.026271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.026387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.026550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.026660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-14 14:10:04.026810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.026944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.026972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-14 14:10:04.027452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.027938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.027965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 
00:34:26.196 [2024-07-14 14:10:04.028103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.028130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.028244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.028270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.028358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.028384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.196 qpair failed and we were unable to recover it. 00:34:26.196 [2024-07-14 14:10:04.028497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.196 [2024-07-14 14:10:04.028526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 00:34:26.197 [2024-07-14 14:10:04.028625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.197 [2024-07-14 14:10:04.028651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 
00:34:26.197 [2024-07-14 14:10:04.028732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.197 [2024-07-14 14:10:04.028757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 00:34:26.197 [2024-07-14 14:10:04.028897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.197 [2024-07-14 14:10:04.028926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 00:34:26.197 [2024-07-14 14:10:04.029040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.197 [2024-07-14 14:10:04.029065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 00:34:26.197 [2024-07-14 14:10:04.029163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.197 [2024-07-14 14:10:04.029188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 00:34:26.197 [2024-07-14 14:10:04.029335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.197 [2024-07-14 14:10:04.029364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.197 qpair failed and we were unable to recover it. 
00:34:26.197-00:34:26.199 [2024-07-14 14:10:04.029501 .. 14:10:04.044434] (repeated ~110 more times) posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed each time by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, alternating between tqpair=0x7fc428000b90 and tqpair=0x1ce0840, always with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:34:26.199 [2024-07-14 14:10:04.044517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.044541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.044655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.044680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.044755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.044779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.044865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.044903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.044997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 
00:34:26.199 [2024-07-14 14:10:04.045102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.045244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.045460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.045568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.045705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 
00:34:26.199 [2024-07-14 14:10:04.045867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.045900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.199 [2024-07-14 14:10:04.045991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.199 [2024-07-14 14:10:04.046017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.199 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.046101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.046206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.046322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.046476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.046663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.046774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.046920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.046962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.047069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.047094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.047211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.047235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.047322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.047365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.047496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.047520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.047661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.047686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.047819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.047847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.047990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.048133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.048293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.048448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.048586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.048746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.048968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.048994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.049113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.049225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.049376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.049488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.049642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.049781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.049927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.049951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.050048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.050155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.050323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.050450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.050596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.050711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 
00:34:26.200 [2024-07-14 14:10:04.050861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.050894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.051002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.051026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.051116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.051141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.051225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.200 [2024-07-14 14:10:04.051267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.200 qpair failed and we were unable to recover it. 00:34:26.200 [2024-07-14 14:10:04.051397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.051422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.051539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.051564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.051691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.051719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.051826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.051851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.051954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.051980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.052092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.052235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.052354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.052490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.052649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.052795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.052960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.052986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.053094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.053119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.053249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.053274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.053404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.053431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.053569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.053594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.053709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.053735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.053856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.053891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.054007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.054124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.054287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.054457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.054595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.054790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.054965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.054991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.055110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.055135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.055239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.055264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.055380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.055406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.055538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.055562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.055676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.055703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 00:34:26.201 [2024-07-14 14:10:04.055923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.055948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it. 
00:34:26.201 [2024-07-14 14:10:04.057781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.201 [2024-07-14 14:10:04.057820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.201 qpair failed and we were unable to recover it.
00:34:26.204 [2024-07-14 14:10:04.072561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.072587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.072703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.072729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.072850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.072884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.072973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.073117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-07-14 14:10:04.073281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.073471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.073610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.073744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.073893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.073920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-07-14 14:10:04.074062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.074088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.074208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.074237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.074367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.074393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.074504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.074530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.074666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.074696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-07-14 14:10:04.074857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.074888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.075003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.075046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.075153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.075182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.075299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.075325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.075441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.075468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-07-14 14:10:04.075626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.075669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.075811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.075838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.075974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.076029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.076166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.076195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.076321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.076346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 
00:34:26.204 [2024-07-14 14:10:04.076486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.076511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.076707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.076776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.076921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.204 [2024-07-14 14:10:04.076948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.204 qpair failed and we were unable to recover it. 00:34:26.204 [2024-07-14 14:10:04.077033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.077175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.077340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.077487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.077620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.077790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.077932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.077957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.078061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.078197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.078300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.078441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.078585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.078740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.078926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.078957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.079127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.079153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.079273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.079319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.079480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.079509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.079624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.079650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.079765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.079791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.079906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.079939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.080117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.080142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.080232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.080257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.080427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.080474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.080612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.080637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.080732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.080758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.080988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.081030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.081201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.081228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.081342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.081383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.081512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.081540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.081699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.081727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.081889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.081934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.082030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.082164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.082302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.082446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.082588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.082699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.082852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.082885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.083029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.083055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.083190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.083248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.083360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.083405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.083548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.083573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 
00:34:26.205 [2024-07-14 14:10:04.083663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.083692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.083803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.205 [2024-07-14 14:10:04.083828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.205 qpair failed and we were unable to recover it. 00:34:26.205 [2024-07-14 14:10:04.083917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.083943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.084064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.084091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.084205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.084237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.084402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.084428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.084541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.084568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.084676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.084705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.084820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.084846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.084978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.085089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.085228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.085338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.085479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.085631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.085814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.085841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.086012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.086153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.086319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.086512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.086676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.086813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.086968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.086994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.087109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.087134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.087220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.087245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.087395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.087422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.087558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.087583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.087737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.087781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.087909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.087940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.088094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.088120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.088239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.088265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.088358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.088385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.088501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.088527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.088621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.088648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.088794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.088837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.088991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.089106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.089242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.089349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.089467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.089576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.089738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.089911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.089940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.090095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.090138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.090309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.090335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 
00:34:26.206 [2024-07-14 14:10:04.090494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.090522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.206 [2024-07-14 14:10:04.090617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.206 [2024-07-14 14:10:04.090646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.206 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.090755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.090780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.090923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.090949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.091067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.091202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.091340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.091483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.091648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.091790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.091959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.091985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.092102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.092213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.092392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.092504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.092635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.092757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.092949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.092978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.093097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.093124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.093233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.093264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.093410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.093436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.093530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.093558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.093677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.093720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.093863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.093896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.093984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.094011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.094174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.094203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.094371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.094397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.094550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.094579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.094687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.094717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.094857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.094888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.095005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.095113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.095285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.095429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.095561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.095662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.095789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.095827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.095982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.096010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.096143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.096182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.096318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.096365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.096495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.096524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.096670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.096698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.096866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.096899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.097034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.097059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.097145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.097170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 
00:34:26.207 [2024-07-14 14:10:04.097254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.207 [2024-07-14 14:10:04.097279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.207 qpair failed and we were unable to recover it. 00:34:26.207 [2024-07-14 14:10:04.097417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.097443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.097559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.097585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.097742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.097770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.097871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.097919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.098043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.098072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.098192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.098220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.098336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.098378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.098506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.098535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.098642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.098669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.098802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.098845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.098996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.099023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.099141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.099166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.099325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.099361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.099559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.099605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.099716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.099741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.099856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.099891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.100004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.100122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.100311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.100443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.100603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.100733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.100919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.100946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.101058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.101085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.101211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.101241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.101339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.101369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.101471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.101501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.101674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.101719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.101849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.101894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.102012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.102038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.102180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.102225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.102338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.102379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.102532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.102581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.102715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.102757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.208 [2024-07-14 14:10:04.102945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.102971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.103050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.103075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.103227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.103271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.103429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.103465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 00:34:26.208 [2024-07-14 14:10:04.103660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.208 [2024-07-14 14:10:04.103688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.208 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.103809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.103836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.103967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.104006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.104137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.104164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.104304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.104354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.104497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.104545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.104746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.104799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.104952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.104979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.105095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.105121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.105285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.105314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.105414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.105443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.105592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.105642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.105789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.105827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.105962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.105990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.106102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.106132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.106283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.106311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.106467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.106514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.106596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.106621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.106734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.106759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.106843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.106874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.106979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.107094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.107219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.107362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.107479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.107606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.107744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.107890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.107917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.108033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.108156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.108298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.108448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.108609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.108743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.108905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.108950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.109103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.109145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.109282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.109334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.109486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.109533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.109650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.109678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.109816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.109841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.109944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.109983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 
00:34:26.209 [2024-07-14 14:10:04.110082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.110108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.110232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.110260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.110381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.110409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.209 [2024-07-14 14:10:04.110526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.209 [2024-07-14 14:10:04.110554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.209 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.110671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.110698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 
00:34:26.210 [2024-07-14 14:10:04.110795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.110825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.110959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.110998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.111119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.111147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.111279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.111322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.111434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.111479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 
00:34:26.210 [2024-07-14 14:10:04.111620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.111665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.111781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.111807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.111924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.111951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.112075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.112218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 
00:34:26.210 [2024-07-14 14:10:04.112396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.112540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.112685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.112806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.112941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.112966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 
00:34:26.210 [2024-07-14 14:10:04.113060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.113176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.113289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.113409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.113580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 
00:34:26.210 [2024-07-14 14:10:04.113731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.113892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.113936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.114055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.114082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.114253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.114296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 00:34:26.210 [2024-07-14 14:10:04.114450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.210 [2024-07-14 14:10:04.114502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.210 qpair failed and we were unable to recover it. 
00:34:26.491 [2024-07-14 14:10:04.114650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.114700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.114805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.114831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.114949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.114987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.115080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.115125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.115280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.115308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 
00:34:26.491 [2024-07-14 14:10:04.115426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.115474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.115618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.115646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.115763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.115798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.115899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.115924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.116018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.116043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 
00:34:26.491 [2024-07-14 14:10:04.116149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.116174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.116274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.116303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.116425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.116457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.116571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.491 [2024-07-14 14:10:04.116615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.491 qpair failed and we were unable to recover it. 00:34:26.491 [2024-07-14 14:10:04.116731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.116761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.116899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.116927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.117018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.117044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.117157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.117201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.117340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.117385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.117554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.117616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.117722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.117752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.117872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.117911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.118383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.118889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.118915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.119032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.119159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.119315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.119474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.119605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.119750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.119910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.119937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.120050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.120177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.120311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.120450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.120627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.120817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.120939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.120966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.121081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.121237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.121420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.121551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.121692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.121854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.492 [2024-07-14 14:10:04.121973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.121999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.122093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.122120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.122273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.122317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.122493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.122521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 00:34:26.492 [2024-07-14 14:10:04.122680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.492 [2024-07-14 14:10:04.122728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.492 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.122858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.122894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.122983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.123098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.123283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.123417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.123607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.123744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.123890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.123935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.124065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.124103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.124262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.124288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.124444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.124472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.124600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.124628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.124747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.124774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.124872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.124910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.125046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.125158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.125273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.125397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.125547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.125743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.125911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.125956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.126050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.126077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.126202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.126228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.126371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.126400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.126573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.126602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.126758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.126787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.126892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.126937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.127051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.127080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.127199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.127224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.127343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.127367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.127492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.127520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.127637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.127664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.127785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.127813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.127965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.128102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.128291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.128423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.128596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.128731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 00:34:26.493 [2024-07-14 14:10:04.128903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.493 [2024-07-14 14:10:04.128946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.493 qpair failed and we were unable to recover it. 
00:34:26.493 [2024-07-14 14:10:04.129060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.494 [2024-07-14 14:10:04.129086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.494 qpair failed and we were unable to recover it. 00:34:26.494 [2024-07-14 14:10:04.129213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.494 [2024-07-14 14:10:04.129267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.494 qpair failed and we were unable to recover it. 00:34:26.494 [2024-07-14 14:10:04.129399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.494 [2024-07-14 14:10:04.129428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.494 qpair failed and we were unable to recover it. 00:34:26.494 [2024-07-14 14:10:04.129619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.494 [2024-07-14 14:10:04.129647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.494 qpair failed and we were unable to recover it. 00:34:26.494 [2024-07-14 14:10:04.129764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.494 [2024-07-14 14:10:04.129789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.494 qpair failed and we were unable to recover it. 
00:34:26.494 [2024-07-14 14:10:04.129904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.129931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.130958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.130983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.131073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.131098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.131304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.131364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.131577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.131626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.131790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.131819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.131940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.131965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.132049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.132074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.132188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.132213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.132364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.132412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.132608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.132658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.132787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.132815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.132904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.132947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.133958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.133984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.134102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.134127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.134243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.134271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.134378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.134404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.134544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.134573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.134697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.134725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.134886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.134911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.135026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.135051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.135137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.135179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.135289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.135314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.135402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.494 [2024-07-14 14:10:04.135430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.494 qpair failed and we were unable to recover it.
00:34:26.494 [2024-07-14 14:10:04.135569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.135598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.135727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.135756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.135887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.135916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.136887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.136912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.137024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.137049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.137163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.137188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.137328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.137356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.137486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.137519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.137655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.137696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.137848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.137889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.138903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.138995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.139021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.139136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.139181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.139322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.139368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.139510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.139539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.139704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.139732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.139847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.139893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.139993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.140019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.140108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.140134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.140237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.140265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.140397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.140449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.495 qpair failed and we were unable to recover it.
00:34:26.495 [2024-07-14 14:10:04.140564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.495 [2024-07-14 14:10:04.140590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.140743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.140772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.140901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.140926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.141068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.141093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.141209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.141234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.141376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.141426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.141573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.141620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.141772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.141807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.141959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.141987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.142136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.142180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.142308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.142359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.142518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.142565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.142653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.142679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.142796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.142822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.142967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.143158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.143277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.143391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.143526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.143686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.143870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.143917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.144025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.144051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.144184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.144212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.144333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.144361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.144498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.144526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.144633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.496 [2024-07-14 14:10:04.144661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.496 qpair failed and we were unable to recover it.
00:34:26.496 [2024-07-14 14:10:04.144763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.496 [2024-07-14 14:10:04.144790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.496 qpair failed and we were unable to recover it. 00:34:26.496 [2024-07-14 14:10:04.144934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.496 [2024-07-14 14:10:04.144963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.496 qpair failed and we were unable to recover it. 00:34:26.496 [2024-07-14 14:10:04.145090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.496 [2024-07-14 14:10:04.145119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.496 qpair failed and we were unable to recover it. 00:34:26.496 [2024-07-14 14:10:04.145285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.496 [2024-07-14 14:10:04.145311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.496 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.145424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.145449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.145597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.145625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.145750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.145777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.145882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.145927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.146038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.146214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.146327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.146492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.146647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.146790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.146934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.146960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.147078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.147195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.147348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.147500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.147659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.147785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.147945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.147970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.148055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.148246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.148380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.148519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.148681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.148837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.148965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.148990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.149117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.149156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.149278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.149305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.149430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.149474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.149630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.149659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.149789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.149815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.149923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.149951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.150070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.150096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.150234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.150272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.150416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.150445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.150561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.150587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.150755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.150784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 
00:34:26.497 [2024-07-14 14:10:04.150912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.150939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.151024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.151048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.151191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.151221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.151334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.497 [2024-07-14 14:10:04.151377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.497 qpair failed and we were unable to recover it. 00:34:26.497 [2024-07-14 14:10:04.151478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.151507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.151641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.151670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.151849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.151897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.152023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.152050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.152140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.152167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.152289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.152314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.152423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.152477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.152599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.152625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.152769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.152800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.152963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.153095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.153248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.153397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.153557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.153702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.153889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.153934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.154057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.154084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.154198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.154224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.154372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.154401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.154529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.154558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.154688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.154717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.154825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.154855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.155016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.155041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.155213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.155255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.155448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.155497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.155608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.155633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.155750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.155778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.155946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.155972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.156064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.156089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.156238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.156263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.156399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.156427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.156563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.156610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.156739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.156767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.156903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.156960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.157097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.157242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.157378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.157517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.157673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.157804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 00:34:26.498 [2024-07-14 14:10:04.157919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.498 [2024-07-14 14:10:04.157946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.498 qpair failed and we were unable to recover it. 
00:34:26.498 [2024-07-14 14:10:04.158028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.158142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.158313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.158449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.158609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.158775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.158925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.158952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.159061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.159087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.159199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.159249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.159435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.159484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.159601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.159627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.159766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.159796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.159915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.159942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.160033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.160058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.160149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.160197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.160311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.160336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.160524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.160553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.160705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.160735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.160873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.160904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.161956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.161984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.162082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.162108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.162231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.162257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.162401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.162427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.162546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.162575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.162726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.162753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.162887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.162918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.163928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.163954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.164048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.164073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.499 [2024-07-14 14:10:04.164157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.499 [2024-07-14 14:10:04.164182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.499 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.164299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.164346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.164504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.164532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.164656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.164683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.164795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.164838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.164991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.165018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.165138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.165183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.165332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.165384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.165491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.165535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.165666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.165694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.165852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.165888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.166031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.166057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.166181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.166220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.166337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.166386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.166537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.166583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.166711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.166743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.166898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.166940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.167032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.167057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.167210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.167237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.167350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.167378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.167489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.167515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.167710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.167758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.167889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.167917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.168050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.168075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.168172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.168197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.168329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.168358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.168528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.168556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.168682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.168710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.168836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.168864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.169024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.169048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.169161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.169203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.169359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.169406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.169555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.169605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.169703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.169731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.169855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.169889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.500 qpair failed and we were unable to recover it.
00:34:26.500 [2024-07-14 14:10:04.170004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.500 [2024-07-14 14:10:04.170028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.170117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.170144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.170238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.170282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.170393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.170418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.170563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.170592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.170738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.170781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.170938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.170977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.171098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.171124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.171208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.171233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.171384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.171438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.171583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.171632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.171753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.171781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.171896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.171952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.172089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.172127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.172246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.172290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.172415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.172441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.172588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.172614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.172733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.172759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.172888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.172915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.173916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.173944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.174037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.174063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.174201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.174230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.174336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.174365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.174517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.174546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.174673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.174703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.174861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.174897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.175033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.175058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.175189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.175219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.175314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.175342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.175448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.175479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.175664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.175710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.175838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.175884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.176009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.501 [2024-07-14 14:10:04.176036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.501 qpair failed and we were unable to recover it.
00:34:26.501 [2024-07-14 14:10:04.176119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.501 [2024-07-14 14:10:04.176145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.501 qpair failed and we were unable to recover it. 00:34:26.501 [2024-07-14 14:10:04.176248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.176295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.176457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.176505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.176657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.176706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.176819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.176865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.177017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.177134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.177242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.177427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.177643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.177784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.177959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.177987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.178104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.178130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.178230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.178258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.178437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.178480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.178622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.178668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.178787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.178813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.178951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.178990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.179114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.179141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.179303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.179332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.179430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.179459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.179614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.179663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.179819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.179846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.179987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.180027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.180147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.180195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.180393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.180421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.180573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.180601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.180751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.180779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.180919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.180944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.181059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.181086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.181204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.181246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.181415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.181451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.181584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.181633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.181765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.181793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.181932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.181957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.182047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.182190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.182323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.182501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.182615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 
00:34:26.502 [2024-07-14 14:10:04.182737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.502 [2024-07-14 14:10:04.182861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.502 [2024-07-14 14:10:04.182912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.502 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.183003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.183142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.183251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.183398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.183537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.183704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.183855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.183889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.184006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.184125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.184298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.184459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.184575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.184718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.184885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.184914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.185034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.185145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.185300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.185493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.185625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.185743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.185896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.185925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.186084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.186264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.186406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.186543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.186668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.186804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.186974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.186999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.187076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.187101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.187212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.187237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.187339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.187367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.187482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.187507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 00:34:26.503 [2024-07-14 14:10:04.187643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.503 [2024-07-14 14:10:04.187671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.503 qpair failed and we were unable to recover it. 
00:34:26.503 [2024-07-14 14:10:04.187764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.187796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.187919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.187958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.188058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.188085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.188202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.188246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.188380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.188425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.188536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.188582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.188681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.188707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.503 qpair failed and we were unable to recover it.
00:34:26.503 [2024-07-14 14:10:04.188797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.503 [2024-07-14 14:10:04.188824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.188931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.188970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.189087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.189121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.189221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.189249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.189377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.189426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.189580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.189608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.189736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.189765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.189873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.189915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.190021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.190053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.190166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.190195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.190297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.190322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.190470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.190513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.190647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.190717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.190854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.190895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.191851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.191889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.192001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.192028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.192125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.192151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.192283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.192327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.192458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.192486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.192653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.192705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.192845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.192895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.193040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.193069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.193220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.193249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.193347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.193376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.193506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.193572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.193695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.193722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.193893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.193931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.194020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.194046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.504 [2024-07-14 14:10:04.194143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.504 [2024-07-14 14:10:04.194186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.504 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.194304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.194346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.194505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.194533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.194657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.194702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.194807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.194833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.194950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.194989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.195943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.195981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.196117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.196156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.196271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.196302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.196407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.196436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.196562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.196592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.196692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.196721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.196835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.196886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.197945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.197973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.198951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.198990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.199082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.199108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.199216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.199244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.199374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.199402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.199522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.199550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.199662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.199708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.199873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.199905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.200005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.200031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.200122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.200163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.505 qpair failed and we were unable to recover it.
00:34:26.505 [2024-07-14 14:10:04.200286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.505 [2024-07-14 14:10:04.200319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.200516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.200569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.200693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.200722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.200824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.200850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.200980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.201837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.201983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.202882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.202997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.203896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.203993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.204105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.204229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.204363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.204541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.204683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.204864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.204909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.205011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.506 [2024-07-14 14:10:04.205038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.506 qpair failed and we were unable to recover it.
00:34:26.506 [2024-07-14 14:10:04.205139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.205297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.205444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.205580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.205726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 
00:34:26.506 [2024-07-14 14:10:04.205845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.205970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.205995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.206084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.206109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.506 qpair failed and we were unable to recover it. 00:34:26.506 [2024-07-14 14:10:04.206225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.506 [2024-07-14 14:10:04.206251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.206391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.206419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.206546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.206574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.206671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.206700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.206816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.206844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.206943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.206969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.207077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.207105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.207292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.207337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.207475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.207519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.207636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.207661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.207751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.207777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.207886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.207926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.208018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.208159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.208312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.208525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.208690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.208806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.208923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.208949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.209031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.209057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.209198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.209242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.209404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.209447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.209581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.209609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.209758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.209796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.209898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.209927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.210021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.210153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.210285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.210443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.210590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.210761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.210919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.210962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.211078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.211124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.211267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.211310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.211448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.211479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.211598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.211650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.211782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.211812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 
00:34:26.507 [2024-07-14 14:10:04.211929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.211955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.212044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.212068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.212161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.212204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.507 [2024-07-14 14:10:04.212370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.507 [2024-07-14 14:10:04.212405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.507 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.212576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.212621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.212720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.212748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.212868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.212906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.213015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.213040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.213178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.213205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.213328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.213370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.213494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.213521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.213622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.213652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.213781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.213810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.213974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.214138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.214305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.214470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.214629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.214757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.214919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.214961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.215052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.215079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.215162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.215188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.215307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.215332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.215519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.215544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.215714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.215743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.215854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.215903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.216049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.216077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.216192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.216218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.216303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.216328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.216505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.216559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.216666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.216691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.216865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.216903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.217014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.217041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.217130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.217156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.217294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.217331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.217511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.217540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.217736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.217791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.217886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.217915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.218027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.218053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.218152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.218179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 
00:34:26.508 [2024-07-14 14:10:04.218264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.218289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.218401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.218450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.218608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.218670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.218823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.508 [2024-07-14 14:10:04.218853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.508 qpair failed and we were unable to recover it. 00:34:26.508 [2024-07-14 14:10:04.218973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.219133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.219279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.219396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.219509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.219652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.219768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.219880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.219907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.220028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.220055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.220163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.220191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.220319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.220348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.220475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.220517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.220705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.220754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.220860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.220905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.220997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.221171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.221321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.221514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.221637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.221788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.221916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.221942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.222037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.222143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.222260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.222474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.222613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.222807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.222964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.222989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.223080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.223202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.223320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 
00:34:26.509 [2024-07-14 14:10:04.223482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.223656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.223775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.223958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.223985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.509 qpair failed and we were unable to recover it. 00:34:26.509 [2024-07-14 14:10:04.224080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.509 [2024-07-14 14:10:04.224107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.224221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.224249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.224380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.224409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.224534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.224562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.224664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.224698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.224864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.224928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.225040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.225078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.225209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.225237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.225364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.225393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.225528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.225572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.225706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.225749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.225917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.225944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.226040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.226182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.226290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.226440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.226628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.226761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.226953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.226992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.227091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.227119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.227246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.227272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.227417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.227447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.227564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.227588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.227707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.227736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.227854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.227907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.227996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.228113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.228242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.228376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.228549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.228677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.228854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.228900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.229002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.229029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.229149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.229175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.229344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.229388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.229524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.229578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.229704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.229730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.229820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.229847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.229997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.230042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.230127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.230153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 00:34:26.510 [2024-07-14 14:10:04.230291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.510 [2024-07-14 14:10:04.230334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.510 qpair failed and we were unable to recover it. 
00:34:26.510 [2024-07-14 14:10:04.230418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.511 [2024-07-14 14:10:04.230444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.511 qpair failed and we were unable to recover it. 00:34:26.511 [2024-07-14 14:10:04.230553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.511 [2024-07-14 14:10:04.230591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.511 qpair failed and we were unable to recover it. 00:34:26.511 [2024-07-14 14:10:04.230708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.511 [2024-07-14 14:10:04.230745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.511 qpair failed and we were unable to recover it. 00:34:26.511 [2024-07-14 14:10:04.230838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.511 [2024-07-14 14:10:04.230864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.511 qpair failed and we were unable to recover it. 00:34:26.511 [2024-07-14 14:10:04.231017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.511 [2024-07-14 14:10:04.231043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.511 qpair failed and we were unable to recover it. 
00:34:26.511 [2024-07-14 14:10:04.231132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.231247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.231382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.231532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.231688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.231808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.231931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.231977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.232963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.232989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.233948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.233978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.234087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.234116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.234261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.234313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.234493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.234547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.234653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.234679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.234812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.234855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.234970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.234997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.235100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.235128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.235307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.235359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.235475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.235527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.235616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.235644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.235761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.235789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.235909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.235935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.236012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.236037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.236119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.236162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.236246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.511 [2024-07-14 14:10:04.236274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.511 qpair failed and we were unable to recover it.
00:34:26.511 [2024-07-14 14:10:04.236360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.236387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.236494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.236521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.236607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.236634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.236748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.236776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.236882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.236913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.237930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.237955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.238048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.238074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.238212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.238248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.238446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.238494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.238614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.238642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.238769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.238817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.238967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.239109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.239269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.239450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.239630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.239799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.239947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.239975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.240075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.240100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.240220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.240246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.240378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.240407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.240545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.240575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.240665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.240695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.240845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.240895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.241924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.241950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.242034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.242058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.242146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.242171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.242266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.242291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.512 [2024-07-14 14:10:04.242379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.512 [2024-07-14 14:10:04.242407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.512 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.242553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.242582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.242702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.242730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.242864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.242898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.243928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.243957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.244941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.244967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.245881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.245976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.246895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.246921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.247036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.247062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.247179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.247204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.513 [2024-07-14 14:10:04.247296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.513 [2024-07-14 14:10:04.247321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.513 qpair failed and we were unable to recover it.
00:34:26.514 [2024-07-14 14:10:04.247434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.514 [2024-07-14 14:10:04.247460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.514 qpair failed and we were unable to recover it.
00:34:26.514 [2024-07-14 14:10:04.247553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.514 [2024-07-14 14:10:04.247579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.514 qpair failed and we were unable to recover it.
00:34:26.514 [2024-07-14 14:10:04.247687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.514 [2024-07-14 14:10:04.247712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.514 qpair failed and we were unable to recover it.
00:34:26.514 [2024-07-14 14:10:04.247792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.247818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.247909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.247935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.248431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.248969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.248995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.249110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.249136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.249229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.249254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.249369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.249395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.249506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.249531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.249654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.249681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.249773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.249802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.249967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.250107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.250307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.250425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.250622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.250763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.250943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.250970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.251077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.251106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.251239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.251275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.251413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.251443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.251566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.251593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.251706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.251731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.251844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.251870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.252018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.252189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.252379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.252494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.252611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.252729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 
00:34:26.514 [2024-07-14 14:10:04.252853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.252891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.253007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.514 [2024-07-14 14:10:04.253039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.514 qpair failed and we were unable to recover it. 00:34:26.514 [2024-07-14 14:10:04.253136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.253254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.253376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.253491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.253634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.253765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.253929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.253956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.254081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.254215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.254333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.254499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.254671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.254807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.254936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.254963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.255056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.255192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.255343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.255471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.255605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.255782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.255942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.255969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.256086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.256123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.256279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.256321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.256480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.256529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.256655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.256683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.256823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.256849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.256981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.257125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.257275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.257391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.257545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.257725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.257866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.257898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.257988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.258116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.258292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.258451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.258560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 
00:34:26.515 [2024-07-14 14:10:04.258694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.515 [2024-07-14 14:10:04.258835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.515 [2024-07-14 14:10:04.258860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.515 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.258980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.259087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.259258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 
00:34:26.516 [2024-07-14 14:10:04.259370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.259478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.259626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.259739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 00:34:26.516 [2024-07-14 14:10:04.259887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.516 [2024-07-14 14:10:04.259913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.516 qpair failed and we were unable to recover it. 
00:34:26.516 [2024-07-14 14:10:04.260021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.260136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.260298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.260417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.260533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.260711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.260854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.260890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.261860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.261895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.262909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.262995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.263910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.263996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.264022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.264104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.264131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.264228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.264254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.264365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.264390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.516 qpair failed and we were unable to recover it.
00:34:26.516 [2024-07-14 14:10:04.264491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.516 [2024-07-14 14:10:04.264516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.264624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.264663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.264810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.264837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.264941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.264970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.265891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.265917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.266906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.266933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.267947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.267975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.268079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.268119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.268267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.268297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.268433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.268463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.268609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.268660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.268808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.268845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.268961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.268990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.269105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.269133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.269266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.269312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.269440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.517 [2024-07-14 14:10:04.269473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.517 qpair failed and we were unable to recover it.
00:34:26.517 [2024-07-14 14:10:04.269618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.269662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.269782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.269808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.269946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.269975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.270907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.270999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.271973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.271998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.272972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.272997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.273079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.273111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.273260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.518 [2024-07-14 14:10:04.273287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.518 qpair failed and we were unable to recover it.
00:34:26.518 [2024-07-14 14:10:04.273387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.273412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.273523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.273551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.273635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.273663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.273818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.273868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.273977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 
00:34:26.518 [2024-07-14 14:10:04.274118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.274249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.274379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.274517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.274649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 
00:34:26.518 [2024-07-14 14:10:04.274790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.274940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.274967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.275066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.518 [2024-07-14 14:10:04.275092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.518 qpair failed and we were unable to recover it. 00:34:26.518 [2024-07-14 14:10:04.275188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.275217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.275309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.275335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.275434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.275460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.275580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.275605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.275720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.275746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.275838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.275866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.275972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.276094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.276261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.276395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.276560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.276670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.276812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.276963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.276993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.277090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.277260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.277391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.277510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.277663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.277790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.277928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.277954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.278043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.278147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.278342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.278472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.278608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.278794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.278914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.278940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.279031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.279197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.279323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.279478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.279594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.279756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.279960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.279991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.280090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.280128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.519 [2024-07-14 14:10:04.280277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.280307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 
00:34:26.519 [2024-07-14 14:10:04.280424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.519 [2024-07-14 14:10:04.280453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.519 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.280593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.280636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.280753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.280784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.280884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.280909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.281000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.281127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.281281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.281450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.281598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.281754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.281935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.281978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.282092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.282242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.282369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.282534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.282666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.282780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.282918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.282943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.283032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.283058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.283159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.283215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.283365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.283395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.283525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.283553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.283703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.283731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.283836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.283861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.283987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.284100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.284223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.284345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.284505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.284631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.284764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.284962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.284987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.285099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.285124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.285216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.285241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 00:34:26.520 [2024-07-14 14:10:04.285378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.285406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
00:34:26.520 [2024-07-14 14:10:04.285601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.520 [2024-07-14 14:10:04.285629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.520 qpair failed and we were unable to recover it. 
[... the same three-line error sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 14:10:04.285751 through 14:10:04.304034, cycling over tqpair values 0x7fc438000b90, 0x7fc430000b90, 0x7fc428000b90, and 0x1ce0840, all against addr=10.0.0.2, port=4420 ...]
00:34:26.524 [2024-07-14 14:10:04.304126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.304262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.304366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.304468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.304584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 
00:34:26.524 [2024-07-14 14:10:04.304733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.304860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.304893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.304987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.305013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.305103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.305130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.305220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.305247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 
00:34:26.524 [2024-07-14 14:10:04.305381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.305412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.305553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.305583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.305791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.305849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.305971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.306009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.306131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.306157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 
00:34:26.524 [2024-07-14 14:10:04.306322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.306350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.306539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.306568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.306659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.306687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.306819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.306847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.306987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.307026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 
00:34:26.524 [2024-07-14 14:10:04.307125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.307153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.307317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.307344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.307520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.307572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.307704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.307734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.307854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.307891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 
00:34:26.524 [2024-07-14 14:10:04.308032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.308058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.308190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.308233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.308421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.308471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.308570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.308598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.308739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.308765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 
00:34:26.524 [2024-07-14 14:10:04.308855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.308887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.309002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.309028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.309132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.309161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.524 qpair failed and we were unable to recover it. 00:34:26.524 [2024-07-14 14:10:04.309260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.524 [2024-07-14 14:10:04.309289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.309383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.309418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.309555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.309597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.309678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.309704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.309830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.309857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.310014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.310149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.310299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.310471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.310610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.310779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.310914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.310941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.311036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.311147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.311255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.311398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.311579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.311728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.311892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.311918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.312061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.312177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.312298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.312464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.312659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.312816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.312952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.312995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.313106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.313134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.313241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.313271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.313407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.313433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.313539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.313564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.313696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.313722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.313866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.313902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.314028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.314142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.314272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.314399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.314506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 
00:34:26.525 [2024-07-14 14:10:04.314617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.314781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.314928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.314954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.525 qpair failed and we were unable to recover it. 00:34:26.525 [2024-07-14 14:10:04.315037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.525 [2024-07-14 14:10:04.315062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 00:34:26.526 [2024-07-14 14:10:04.315181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.526 [2024-07-14 14:10:04.315205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 
00:34:26.526 [2024-07-14 14:10:04.315300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.526 [2024-07-14 14:10:04.315325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 00:34:26.526 [2024-07-14 14:10:04.315408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.526 [2024-07-14 14:10:04.315433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 00:34:26.526 [2024-07-14 14:10:04.315522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.526 [2024-07-14 14:10:04.315546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 00:34:26.526 [2024-07-14 14:10:04.315690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.526 [2024-07-14 14:10:04.315718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 00:34:26.526 [2024-07-14 14:10:04.315824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.526 [2024-07-14 14:10:04.315853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.526 qpair failed and we were unable to recover it. 
00:34:26.526 [2024-07-14 14:10:04.315985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.526 [2024-07-14 14:10:04.316032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.526 qpair failed and we were unable to recover it.
00:34:26.529 [... the identical "connect() failed, errno = 111" (ECONNREFUSED) / "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats without interruption from 14:10:04.316181 through 14:10:04.331806, cycling over tqpairs 0x1ce0840, 0x7fc428000b90, 0x7fc430000b90, and 0x7fc438000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:26.529 [2024-07-14 14:10:04.331921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.331948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.332089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.332134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.332295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.332338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.332454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.332481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.332625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.332650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 
00:34:26.529 [2024-07-14 14:10:04.332743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.332768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.332900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.332926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 
00:34:26.529 [2024-07-14 14:10:04.333406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.333829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.333974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 
00:34:26.529 [2024-07-14 14:10:04.334119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.334256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.334421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.334579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.334722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 
00:34:26.529 [2024-07-14 14:10:04.334842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.334867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.334995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.335132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.335276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.335424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 
00:34:26.529 [2024-07-14 14:10:04.335614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.335779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.335948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.335977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.336105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.336149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 00:34:26.529 [2024-07-14 14:10:04.336275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.336319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.529 qpair failed and we were unable to recover it. 
00:34:26.529 [2024-07-14 14:10:04.336481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.529 [2024-07-14 14:10:04.336528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.336653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.336679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.336789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.336814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.336934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.336960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.337040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.337182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.337321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.337466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.337609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.337747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.337902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.337929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.338565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.338964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.338990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.339084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.339202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.339337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.339475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.339596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.339742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.339895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.339920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.340534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.340890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.340977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.341138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.341286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.341429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.341573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.341701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.341892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.341943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 
00:34:26.530 [2024-07-14 14:10:04.342036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.342061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.530 qpair failed and we were unable to recover it. 00:34:26.530 [2024-07-14 14:10:04.342169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.530 [2024-07-14 14:10:04.342194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.342345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.342372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.342482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.342511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.342642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.342670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 
00:34:26.531 [2024-07-14 14:10:04.342790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.342833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.343020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.343143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.343305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.343461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 
00:34:26.531 [2024-07-14 14:10:04.343655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.343792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.343952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.343990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.344090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.344117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 00:34:26.531 [2024-07-14 14:10:04.344263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.531 [2024-07-14 14:10:04.344289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.531 qpair failed and we were unable to recover it. 
00:34:26.531 [2024-07-14 14:10:04.344369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.344394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.344487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.344514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.344597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.344623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.344759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.344785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.344914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.344969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.345082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.345121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.345242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.345293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.345441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.345489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.345653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.345681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.345812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.345837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.345978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.346137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.346284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.346453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.346572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.346763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.346938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.346966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.347058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.347085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.347173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.347198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.347313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.347338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.531 [2024-07-14 14:10:04.347495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.531 [2024-07-14 14:10:04.347524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.531 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.347642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.347671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.347787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.347829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.347946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.347973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.348102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.348145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.348291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.348316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.348436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.348460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.348597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.348625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.348776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.348804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.348953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.348980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.349068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.349093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.349242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.349285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.349441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.349494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.349616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.349641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.349786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.349811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.349926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.349952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.350941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.350967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.351104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.351287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.351441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.351547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.351690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.351859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.351994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.352131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.352269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.352436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.352603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.352770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.352934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.352973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.353107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.353146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.353264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.353302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.353387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.353413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.353526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.353551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.353674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.532 [2024-07-14 14:10:04.353700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.532 qpair failed and we were unable to recover it.
00:34:26.532 [2024-07-14 14:10:04.353817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.353844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.353963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.353989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.354871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.354923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.355923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.355964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.356089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.356114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.356212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.356236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.356428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.356456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.356645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.356673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.356820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.356845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.356966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.356991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.357108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.357133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.357234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.357259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.357411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.357437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.357572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.357615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.357715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.357754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.357898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.357936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.358112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.358262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.358379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.358488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.358629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.533 [2024-07-14 14:10:04.358811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.533 qpair failed and we were unable to recover it.
00:34:26.533 [2024-07-14 14:10:04.358965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.358993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 00:34:26.533 [2024-07-14 14:10:04.359088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.359114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 00:34:26.533 [2024-07-14 14:10:04.359214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.359257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 00:34:26.533 [2024-07-14 14:10:04.359385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.359416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 00:34:26.533 [2024-07-14 14:10:04.359561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.359588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 
00:34:26.533 [2024-07-14 14:10:04.359727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.359770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 00:34:26.533 [2024-07-14 14:10:04.359908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.533 [2024-07-14 14:10:04.359948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.533 qpair failed and we were unable to recover it. 00:34:26.533 [2024-07-14 14:10:04.360047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.360074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.360198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.360243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.360355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.360381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.360513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.360562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.360679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.360705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.360828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.360856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.360980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.361116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.361298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.361441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.361597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.361755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.361923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.361967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.362109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.362135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.362249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.362279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.362446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.362472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.362589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.362614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.362740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.362783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.362868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.362901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.362993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.363137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.363311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.363458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.363598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.363741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.363872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.363918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.364039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.364181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.364318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.364459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.364598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.364771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.364892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.364918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.365038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.365145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.365300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.365406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.365549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 
00:34:26.534 [2024-07-14 14:10:04.365662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.365817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.534 [2024-07-14 14:10:04.365961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.534 [2024-07-14 14:10:04.365988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.534 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.366082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.366108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.366204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.366231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.366334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.366372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.366541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.366591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.366718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.366748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.366912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.366951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.367069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.367217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.367328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.367465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.367584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.367705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.367832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.367871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.367974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.368090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.368209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.368392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.368544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.368707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.368845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.368969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.368995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.369088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.369114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.369224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.369272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.369408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.369443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.369561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.369617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.369771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.369820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.369914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.369941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.370054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.370217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.370362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.370507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.370649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.535 [2024-07-14 14:10:04.370790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.370919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.370945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.371031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.371057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.371214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.371253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 00:34:26.535 [2024-07-14 14:10:04.371381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.535 [2024-07-14 14:10:04.371408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.535 qpair failed and we were unable to recover it. 
00:34:26.536 [2024-07-14 14:10:04.371495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.371521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.371619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.371647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.371746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.371784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.371919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.371947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.372062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.372087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 
00:34:26.536 [2024-07-14 14:10:04.372205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.372230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.372347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.372372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.372503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.372531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.372662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.372694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 00:34:26.536 [2024-07-14 14:10:04.372859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.536 [2024-07-14 14:10:04.372895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.536 qpair failed and we were unable to recover it. 
00:34:26.536 [2024-07-14 14:10:04.373014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.373823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.373982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.374934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.374960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.536 qpair failed and we were unable to recover it.
00:34:26.536 [2024-07-14 14:10:04.375047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.536 [2024-07-14 14:10:04.375072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.375206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.375234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.375327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.375355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.375451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.375494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.375661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.375689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.375812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.375839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.375972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.376135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.376343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.376464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.376612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.376766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.376916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.376943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.377058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.377084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.377238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.377263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.377376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.377401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.377549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.377574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.377686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.377714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.377866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.377915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.378966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.378992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.379080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.379105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.379225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.537 [2024-07-14 14:10:04.379250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.537 qpair failed and we were unable to recover it.
00:34:26.537 [2024-07-14 14:10:04.379365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.379406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.379524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.379573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.379702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.379730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.379872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.379924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.380090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.380116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.380230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.380256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.380374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.380399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.380567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.380595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.380723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.380750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.380843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.380872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.381896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.381941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.382054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.382080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.382218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.382261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.382416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.382465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.382596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.382626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.382752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.382783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.382900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.382958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.383087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.383113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.383233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.383274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.383403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.383428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.383643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.383691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.383788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.538 [2024-07-14 14:10:04.383816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.538 qpair failed and we were unable to recover it.
00:34:26.538 [2024-07-14 14:10:04.383921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.383969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.384946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.384985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.385129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.385158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.385263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.385318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.385449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.385478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.385612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.385641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.385765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.385793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.385958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.385984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.386078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.386104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.386245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.386270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.386433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.386482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.386605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.386633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.386766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.386794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.386907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.386949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.387042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.387067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.387184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.387212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.387381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.387437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.387630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.387679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.387843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.387871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.387989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.388014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.388156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.388181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.388295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.388344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.539 qpair failed and we were unable to recover it.
00:34:26.539 [2024-07-14 14:10:04.388464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.539 [2024-07-14 14:10:04.388513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.388642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.388670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.388794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.388823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.388957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.388982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.389103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.389128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.389222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.389247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.389433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.389480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.389621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.389671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.389796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.389824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.389969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.389994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.390138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.390163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.390277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.390319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.390477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.390505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.390728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.390756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.390886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.390930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.391907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.391950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.392066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.392221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.392360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.392506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.392635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.392801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.540 [2024-07-14 14:10:04.392971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.540 [2024-07-14 14:10:04.393010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.540 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.393143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.393181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.393286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.393329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.393456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.393486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.393624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.393653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.393778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.393806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.393957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.393983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.394092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.394130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.394265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.394311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.394444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.394487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.394592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.394620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.394750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.394776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.394924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.394950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.395947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.395973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.541 [2024-07-14 14:10:04.396061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.541 [2024-07-14 14:10:04.396086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.541 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.396178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.396203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.396288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.396313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.396432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.396459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.396578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.396609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.396707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.396735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.396869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.396902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.397930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.397957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.398102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.398141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.398273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.398315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.398507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.398556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.398713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.398742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.398887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.398930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.399054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.399219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.399347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.399468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.399596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.399818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.399969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.400007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.400147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.400192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.400365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.542 [2024-07-14 14:10:04.400413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.542 qpair failed and we were unable to recover it.
00:34:26.542 [2024-07-14 14:10:04.400538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.400586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.400719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.400747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.400885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.400915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.401970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.401998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.402081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.402108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.402190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.402232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.402355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.402384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.402503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.543 [2024-07-14 14:10:04.402552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.543 qpair failed and we were unable to recover it.
00:34:26.543 [2024-07-14 14:10:04.402679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.402708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.402831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.402870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.403001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.403033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.403154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.403180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.403305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.403333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 
00:34:26.543 [2024-07-14 14:10:04.403498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.403527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.403685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.403732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.403871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.403907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.404041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.404198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 
00:34:26.543 [2024-07-14 14:10:04.404315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.404425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.404570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.404718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 00:34:26.543 [2024-07-14 14:10:04.404861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.543 [2024-07-14 14:10:04.404893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.543 qpair failed and we were unable to recover it. 
00:34:26.544 [2024-07-14 14:10:04.404983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.405132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.405246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.405387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.405529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 
00:34:26.544 [2024-07-14 14:10:04.405669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.405779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.405940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.405978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.406115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.406145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.406269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.406297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 
00:34:26.544 [2024-07-14 14:10:04.406436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.406464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.406626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.406677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.406774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.406804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.406919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.406947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.407108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.407175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 
00:34:26.544 [2024-07-14 14:10:04.407347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.407375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.407559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.407608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.407697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.407722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.407863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.407894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.408063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.408091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 
00:34:26.544 [2024-07-14 14:10:04.408241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.408269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.408414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.408466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.408590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.408640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.408843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.408871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.409008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.409033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 
00:34:26.544 [2024-07-14 14:10:04.409154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.409194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.409346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.409374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.544 qpair failed and we were unable to recover it. 00:34:26.544 [2024-07-14 14:10:04.409486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.544 [2024-07-14 14:10:04.409511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.409618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.409646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.409813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.409856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 
00:34:26.545 [2024-07-14 14:10:04.409990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.410029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.410187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.410231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.410343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.410373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.410525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.410579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.410684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.410714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 
00:34:26.545 [2024-07-14 14:10:04.410853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.410904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.411047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.411076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.411198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.411226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.411390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.411439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.411560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.411613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 
00:34:26.545 [2024-07-14 14:10:04.411730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.411756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.411900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.411939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.412084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.412271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.412428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 
00:34:26.545 [2024-07-14 14:10:04.412556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.412678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.412827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.412948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.412974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.413090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.413114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 
00:34:26.545 [2024-07-14 14:10:04.413208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.413236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.413358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.413385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.413513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.413540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.413638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.413665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 00:34:26.545 [2024-07-14 14:10:04.413759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.545 [2024-07-14 14:10:04.413786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.545 qpair failed and we were unable to recover it. 
00:34:26.545 [2024-07-14 14:10:04.413883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.413926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.414020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.414046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.414163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.414187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.414320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.414348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.414538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.414566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 
00:34:26.546 [2024-07-14 14:10:04.414693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.414720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.414840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.414868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.415017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.415127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.415311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 
00:34:26.546 [2024-07-14 14:10:04.415477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.415626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.415767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.415910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.415940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 00:34:26.546 [2024-07-14 14:10:04.416061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.546 [2024-07-14 14:10:04.416086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.546 qpair failed and we were unable to recover it. 
00:34:26.546 [2024-07-14 14:10:04.416189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.416217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.416355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.416398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.416524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.416551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.416700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.416727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.416858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.416889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.416982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.417092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.417260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.417417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.417639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.417777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.417920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.417947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.418068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.546 [2024-07-14 14:10:04.418094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.546 qpair failed and we were unable to recover it.
00:34:26.546 [2024-07-14 14:10:04.418214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.418240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.418354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.418380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.418552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.418580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.418737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.418765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.418902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.418927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.419892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.419934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.420018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.420048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.420164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.420189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.420377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.420425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.420559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.420602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.420707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.420738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.420863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.420899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.421036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.421062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.421203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.421247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.421372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.421400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.421556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.421584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.421709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.421738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.421862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.421896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.422008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.422033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.422151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.422176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.422318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.422345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.422469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.547 [2024-07-14 14:10:04.422498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.547 qpair failed and we were unable to recover it.
00:34:26.547 [2024-07-14 14:10:04.422635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.422665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.422800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.422829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.422987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.423026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.423153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.423181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.423344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.423388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.423521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.423564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.423750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.423801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.423905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.423948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.424038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.424065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.424282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.424310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.424467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.424496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.424599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.424628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.424725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.424753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1612845 Killed "${NVMF_APP[@]}" "$@"
00:34:26.548 [2024-07-14 14:10:04.424851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.424888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.425050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.425075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.425161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.425192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:26.548 [2024-07-14 14:10:04.425311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.425336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:26.548 [2024-07-14 14:10:04.425473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:26.548 [2024-07-14 14:10:04.425502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:26.548 [2024-07-14 14:10:04.425659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.425688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.425793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.425821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.425959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.425985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.426101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.426127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.426287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.426329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.426462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.426490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.426619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.426647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.426743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.426771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.426908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.426957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.427068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.548 [2024-07-14 14:10:04.427093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.548 qpair failed and we were unable to recover it.
00:34:26.548 [2024-07-14 14:10:04.427194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.427219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.427360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.427385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.427552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.427580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.427697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.427725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.427814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.427842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.427963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.427988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.428072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.428097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.428252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.428279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.428410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.428438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.428596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.428624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.428748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.428776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1613402
00:34:26.549 [2024-07-14 14:10:04.428905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.428936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1613402
00:34:26.549 [2024-07-14 14:10:04.429039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.429065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.429172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.429197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1613402 ']'
00:34:26.549 [2024-07-14 14:10:04.429324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.429352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:26.549 [2024-07-14 14:10:04.429480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.429510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:26.549 [2024-07-14 14:10:04.429642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.429671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.549 [2024-07-14 14:10:04.429826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.429871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.430953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.430981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.431103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.431129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.431230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.431256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.431404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.431430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.431544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.431572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.431715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.431742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.431862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.431907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.432044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.432070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.432219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.432244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.432373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.432402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.432499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.432527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.432628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.432656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.549 qpair failed and we were unable to recover it.
00:34:26.549 [2024-07-14 14:10:04.432789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.549 [2024-07-14 14:10:04.432815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.550 qpair failed and we were unable to recover it.
00:34:26.550 [2024-07-14 14:10:04.432942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.550 [2024-07-14 14:10:04.432966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.550 qpair failed and we were unable to recover it.
00:34:26.550 [2024-07-14 14:10:04.433093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.550 [2024-07-14 14:10:04.433136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.550 qpair failed and we were unable to recover it.
00:34:26.550 [2024-07-14 14:10:04.433243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.550 [2024-07-14 14:10:04.433272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.550 qpair failed and we were unable to recover it.
00:34:26.550 [2024-07-14 14:10:04.433372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.550 [2024-07-14 14:10:04.433399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.550 qpair failed and we were unable to recover it.
00:34:26.550 [2024-07-14 14:10:04.433516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.550 [2024-07-14 14:10:04.433545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.550 qpair failed and we were unable to recover it.
00:34:26.550 [2024-07-14 14:10:04.433677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.433705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.433827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.433855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.433982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.434096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.434235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.434376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.434523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.434685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.434866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.434911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.435009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.435037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.435130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.435156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.435301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.435346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.435481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.435525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.435662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.435705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.435815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.435841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.435977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.436134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.436300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.436451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.436573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.436706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.436844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.436872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.437363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.437860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.437892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.438034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.438059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.438176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.438202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.438333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.438358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.438439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.438465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 00:34:26.550 [2024-07-14 14:10:04.438557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.550 [2024-07-14 14:10:04.438583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.550 qpair failed and we were unable to recover it. 
00:34:26.550 [2024-07-14 14:10:04.438722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.438748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.438841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.438866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.438965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.438990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.439087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.439112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.439248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.439276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.439404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.439433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.439562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.439590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.439718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.439750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.439872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.439916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.440031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.440157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.440370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.440503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.440679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.440837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.440968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.440994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.441080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.441107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.441243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.441287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.441423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.441468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.441593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.441621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.441723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.441754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.441882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.441909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.442002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.442027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.442115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.442140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.442226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.442251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.442355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.442404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.442550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.442578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.442798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.442843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.442992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.443146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.443300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.443481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.443621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.443768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.443904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.443933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.444037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.444065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.444196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.444222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.444339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.444365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.444467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.444494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.444610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.444636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 
00:34:26.551 [2024-07-14 14:10:04.444727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-07-14 14:10:04.444753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-07-14 14:10:04.444853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.444886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.444987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.445098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.445244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.445419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.445555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.445676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.445848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.445875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.445975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.446083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.446243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.446404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.446547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.446703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.446829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.446867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.446978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.447005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.447120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.447145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.447277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.447317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.447469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.447526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.447709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.447749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.447901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.447928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.448046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.448072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.448161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.448207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.448330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.448367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.448524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.448552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.448658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.448686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.448790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.448820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.448986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.449123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.449287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.449453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.449575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.449695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.449854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.449894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-07-14 14:10:04.450012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-07-14 14:10:04.450042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-07-14 14:10:04.450146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-07-14 14:10:04.450189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-07-14 14:10:04.450314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-07-14 14:10:04.450343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-07-14 14:10:04.450455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-07-14 14:10:04.450481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.450593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.450636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-07-14 14:10:04.450745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.450774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.450892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.450917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.450999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.451108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.451272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-07-14 14:10:04.451420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.451536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.451703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.451824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.451852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.451976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-07-14 14:10:04.452129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.452285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.452433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.452592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.452742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-07-14 14:10:04.452859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.452897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.453003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.453151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.453283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.453396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-07-14 14:10:04.453544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.453682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-07-14 14:10:04.453821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-07-14 14:10:04.453849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.453957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.453986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.454076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.454236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.454374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.454530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.454670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.454814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.454947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.454974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.455063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.455089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.455218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.455247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.455372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.455401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.455504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.455538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.455651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.455696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.455836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.455865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.455992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.456107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.456255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.456442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.456586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.456771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.456905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.456934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.457022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.457048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.457174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.457203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.457314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.457340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.457513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.457564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.457697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.457749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.457860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.457890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.457975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.458085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.458218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.458336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.458460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.458579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.458740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.458914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.458968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.459132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.459171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.459311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.459341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.459469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.459513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.459626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.459657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.459784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.459813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.459960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.459999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-07-14 14:10:04.460142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-07-14 14:10:04.460169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-07-14 14:10:04.460259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.460285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.460371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.460397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.460483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.460511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.460643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.460681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.460773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.460801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.460896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.460923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.461885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.461994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.462019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.462100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.462124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.462248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.462273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.462416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.462443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.462528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-07-14 14:10:04.462556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
00:34:26.863 [2024-07-14 14:10:04.462674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.462704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.462798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.462836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.462942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.462971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.463070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.463096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.463188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.463214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.463355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.463384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.463518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.463563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.463716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.463745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.463856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.463906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.464936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.464974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.465099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.465125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.465214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.465240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.465405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.465439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.465589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.465617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.465714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.465742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.465870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.465906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.466969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.466995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.467111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.467137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.467242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.467282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.467458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.467494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.467615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.467644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.467753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.467781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.467915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.467954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.468102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.468129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.468235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.468272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.468426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.468473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.468591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.468617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.468743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.468782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.468885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.468912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.469859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.469895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.470972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.470999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.471085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.471110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.471219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.471269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.471430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.471480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.471627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.471679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.471800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.471828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.471953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.471982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.472104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.472130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.472221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.472248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.472355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.472384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.472483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.472514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.472614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.864 [2024-07-14 14:10:04.472643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.864 qpair failed and we were unable to recover it.
00:34:26.864 [2024-07-14 14:10:04.472775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.472803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.472952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.472978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.473066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.473091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.473212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.473240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.473375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.473446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.473522] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:34:26.865 [2024-07-14 14:10:04.473573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.473622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.865 [2024-07-14 14:10:04.473624] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.473791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.473819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.473926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.473952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.474085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.474111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.474251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.865 [2024-07-14 14:10:04.474280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.865 qpair failed and we were unable to recover it.
00:34:26.865 [2024-07-14 14:10:04.474388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.474431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.474530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.474559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.474655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.474684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.474785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.474810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.477998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.478164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.478307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.478427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.478624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.478814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.478936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.478962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.479053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.479079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.479186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.479225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.479348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.479376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.479538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.479582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.479687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.479717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.479853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.479893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.480033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.480060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.480220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.480249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.480393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.480420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.480583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.480620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.480728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.480758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.480893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.480938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.481022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.481174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.481340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.481480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.481623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.481778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.481924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.481967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.482086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.482113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.482287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.482316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.482409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.482438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.482535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.482563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.482836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.482874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 
00:34:26.865 [2024-07-14 14:10:04.483063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.483090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.483204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.483243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.483357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.483382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.483497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.865 [2024-07-14 14:10:04.483528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.865 qpair failed and we were unable to recover it. 00:34:26.865 [2024-07-14 14:10:04.483635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.483663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.483787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.483815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.483943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.483975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.484100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.484127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.484292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.484318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.484433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.484459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.484603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.484633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.484749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.484775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.484928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.484954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.485075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.485105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.485235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.485261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.485394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.485441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.485595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.485623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.485781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.485809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.485964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.485990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.486088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.486114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.486212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.486237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.486347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.486372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.486503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.486532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.486644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.486686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.486815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.486847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.487008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.487178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.487302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.487446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.487595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.487739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.487864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.487903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.488052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.488091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.488218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.488249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.488384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.488412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.488510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.488540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.488670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.488698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.488819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.488848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.489000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.489117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.489238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.489366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.489493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.489625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.489759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.489945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.489984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.490082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.490195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.490344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.490484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.490645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.490781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.490923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.490949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.491062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.491226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.491356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.491494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.491646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.491796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-07-14 14:10:04.491955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-07-14 14:10:04.491987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 
00:34:26.866 [2024-07-14 14:10:04.492091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.866 [2024-07-14 14:10:04.492119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.866 qpair failed and we were unable to recover it.
00:34:26.866 [2024-07-14 14:10:04.492227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.866 [2024-07-14 14:10:04.492256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.866 qpair failed and we were unable to recover it.
00:34:26.866 [2024-07-14 14:10:04.492398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.866 [2024-07-14 14:10:04.492424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.866 qpair failed and we were unable to recover it.
00:34:26.866 [2024-07-14 14:10:04.492548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.866 [2024-07-14 14:10:04.492576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.866 qpair failed and we were unable to recover it.
00:34:26.866 [2024-07-14 14:10:04.492697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.492726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.492833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.492858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.493058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.493245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.493403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.493527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.493698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.493871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.493989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.494831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.494962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.495101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.495237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.495421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.495595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.495743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.495911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.495955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.496074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.496227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.496389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.496536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.496726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.496852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.496981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.497091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.497280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.497459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.497585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.497721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.497947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.497986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.498109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.498137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.498231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.498257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.498400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.498452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.498586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.498637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.498759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.498788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.498906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.498951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.499945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.499984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.500116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.500154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.500245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.500271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.500448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.500496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.500604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.500629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.500798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.500823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.500922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.500947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.501035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.501060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.501191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.501216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.501351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.501400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.501596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.501647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.501749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.501777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.501888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.501936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.502049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.502074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.502209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.502247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [2024-07-14 14:10:04.502346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-07-14 14:10:04.502373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.502496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.502523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.502624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.502652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.502785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.502829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.502969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.503851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.503987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.504140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.504319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.504538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.504706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.504824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.504957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.504984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.505073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.505114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.505276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.505304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.505454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.505508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.505602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.505633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.505758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.505787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.505900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.505939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.506045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.506072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.506159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.506192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.506292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.506321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.506478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.506527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.506714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-07-14 14:10:04.506765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [2024-07-14 14:10:04.506869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.506903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.506988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.507124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.507301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.507443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 
00:34:26.868 [2024-07-14 14:10:04.507597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.507784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.507923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.507951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.508065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.508246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 
00:34:26.868 [2024-07-14 14:10:04.508412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.508526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.508648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.508762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.508946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.508974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 
00:34:26.868 [2024-07-14 14:10:04.509070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.509181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.509290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.509421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.509532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 
00:34:26.868 [2024-07-14 14:10:04.509655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.509827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.509855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.509987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.510031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.510189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.510219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.868 [2024-07-14 14:10:04.510327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.510353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 
00:34:26.868 [2024-07-14 14:10:04.510461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.868 [2024-07-14 14:10:04.510504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.868 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.510620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.510646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.510734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.510759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.510872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.510904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.510988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.511106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.511252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.511358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.511510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.511628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.511738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.511907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.511932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.512018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.512155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.512305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 EAL: No free 2048 kB hugepages reported on node 1 00:34:26.869 [2024-07-14 14:10:04.512461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.512621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.512790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.512912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.512941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.513050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.513176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.513356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.513518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.513658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.513808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.513959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.513985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.514118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.514146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.514235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.514263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.514383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.514431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.514568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.514597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.514706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.514734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.514826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.514853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.514980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.515109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.515262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.515394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.515550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.515679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.515792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.515916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.515944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.516061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.516176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.516319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.516436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.516553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.516671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.516787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.516906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.516933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.517051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.517213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.517368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.517482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.517604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.517720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.517837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.517967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.517994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.518086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.518113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.518228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.518253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.518343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.518368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.518463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.518492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 
00:34:26.869 [2024-07-14 14:10:04.518606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.518633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.518725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.869 [2024-07-14 14:10:04.518755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.869 qpair failed and we were unable to recover it. 00:34:26.869 [2024-07-14 14:10:04.518852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.518885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-07-14 14:10:04.518976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-07-14 14:10:04.519088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 
00:34:26.870 [2024-07-14 14:10:04.519233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-07-14 14:10:04.519347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-07-14 14:10:04.519467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-07-14 14:10:04.519579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-07-14 14:10:04.519722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-07-14 14:10:04.519748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 
00:34:26.870 [2024-07-14 14:10:04.519835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.519862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.519966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.519992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.520844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.520871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.521974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.521999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.522963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.522990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.523898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.523937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.524080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.524245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.524391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.524542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.524687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.524823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.524972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.525862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.525895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.526910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.526938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.527033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-07-14 14:10:04.527060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [2024-07-14 14:10:04.527188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.527213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.527297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.527322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.527409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.527435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.527541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.527580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.527725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.527756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.527849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.527883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.528888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.528981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.529912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.529938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.530895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.530923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.531971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.531997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.532908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.532935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 [2024-07-14 14:10:04.533793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-07-14 14:10:04.533822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.533924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.533950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.534855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.534887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.535931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.535956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.536856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.536976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.537932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.537959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.538887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.538979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.539958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.539985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.540923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.540962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.541924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.541963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.542082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.542109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.542198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.542223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.542334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.872 [2024-07-14 14:10:04.542359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-07-14 14:10:04.542471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.542497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.542585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.542610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.542718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.542743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.542859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.542889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.542972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.542997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.543934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.543973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.873 [2024-07-14 14:10:04.544858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.873 qpair failed and we were unable to recover it.
00:34:26.873 [2024-07-14 14:10:04.544956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.544981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.545092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.545229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.545379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.545498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.545639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.545780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.545929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.545957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.546047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.546159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.546299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.546408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.546526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.546654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.546782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.546957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.546984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.547474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:26.873 [2024-07-14 14:10:04.547550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.547956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.547983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.548068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.548176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.548289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.548425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.548540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.548671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.548814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.548933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.548958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.549048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.549164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.549311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-07-14 14:10:04.549456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.549593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.549734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.549895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.873 [2024-07-14 14:10:04.549923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-07-14 14:10:04.550016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.550162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.550282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.550395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.550544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.550707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.550817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.550936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.550964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.551424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.551900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.551928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.552074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.552192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.552340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.552481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.552625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.552749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.552905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.552931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.553417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.553908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.553942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.554078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.554241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.554387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.554490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.554630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.554757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.554927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.554954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.555071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.555216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.555337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.555473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.555591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.555729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.555921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.555967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.556062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.556201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.556340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.556455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.556583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.556738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 
00:34:26.874 [2024-07-14 14:10:04.556889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.556927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.557030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.557056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.557143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.557169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.874 [2024-07-14 14:10:04.557264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.874 [2024-07-14 14:10:04.557289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.874 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.557386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.557415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.557506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.557533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.557659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.557685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.557804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.557829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.557953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.557979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.558066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.558207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.558350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.558521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.558640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.558789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.558958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.558984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.559582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.559899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.559989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.560131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.560263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.560405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.560552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.560694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.560851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.560882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.561000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.561115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.561255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.561407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.561531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.561638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.561802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.561924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.561950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.562040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.562173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.562285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.562424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.562550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.562689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.562828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.562884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.562992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.563141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.563326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.563445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.563613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.563799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.563918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.563944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.564060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.564186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.564324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.564462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.564608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.564768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.564943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.564972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.565119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.565145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 
00:34:26.875 [2024-07-14 14:10:04.565272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.565298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.565418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.565444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.565565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.565593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.565729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.875 [2024-07-14 14:10:04.565768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.875 qpair failed and we were unable to recover it. 00:34:26.875 [2024-07-14 14:10:04.565897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.565927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.566050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.566176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.566321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.566436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.566552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.566680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.566882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.566910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.567380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.567929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.567955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.568080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.568246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.568385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.568525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.568668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.568819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.568972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.568999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.569128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.569245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.569385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.569494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.569645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.569815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.569947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.569974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.570098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.570124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.570248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.570274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.570367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.570394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.570485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.570510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.570627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.570653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 00:34:26.876 [2024-07-14 14:10:04.570797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.876 [2024-07-14 14:10:04.570823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.876 qpair failed and we were unable to recover it. 
00:34:26.876 [2024-07-14 14:10:04.570946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.570973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.571141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.571304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.571453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.571599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.571741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.571885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.571986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.572942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.572969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.573053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.573079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.573168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.573196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.573292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.573319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.573421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.573459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.573552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.573579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.876 qpair failed and we were unable to recover it.
00:34:26.876 [2024-07-14 14:10:04.573729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.876 [2024-07-14 14:10:04.573755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.573840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.573864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.573961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.573986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.574102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.574127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.574213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.574238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.574348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.574374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.574518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.574546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.574707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.574746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.574857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.574906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.575900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.575990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.576955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.576983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.577100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.577125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.577241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.577267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.577345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.577370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.577474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.577512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.577649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.577689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.577826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.577865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.578888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.578916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.579933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.579972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.580914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.580941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.581039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.581066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.581145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.581170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.581284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.581311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.877 qpair failed and we were unable to recover it.
00:34:26.877 [2024-07-14 14:10:04.581394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.877 [2024-07-14 14:10:04.581419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.581564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.581591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.581681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.581706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.581829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.581858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.581983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.582851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.582977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.583867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.583999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.584026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.584115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.878 [2024-07-14 14:10:04.584141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.878 qpair failed and we were unable to recover it.
00:34:26.878 [2024-07-14 14:10:04.584251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.584277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.584393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.584419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.584535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.584560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.584672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.584698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.584792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.584819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.584953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.584980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.585071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.585191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.585307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.585479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.585620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.585798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.585950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.585990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.586117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.586145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.586310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.586336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.586422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.586448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.586563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.586589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.586703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.586729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.586822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.586848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.586988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.587135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.587296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.587437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.587552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.587688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.587829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.587953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.587978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.588100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.588243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.588361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.588485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.588633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.588765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.588952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.588979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.589092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.589212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.589359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.589498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.589616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.589737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 
00:34:26.878 [2024-07-14 14:10:04.589863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.589914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.590015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.590041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.590134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.590159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.878 [2024-07-14 14:10:04.590275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.878 [2024-07-14 14:10:04.590301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.878 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.590392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.590418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.590539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.590567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.590668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.590695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.590815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.590841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.590963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.590990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.591078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.591212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.591328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.591470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.591616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.591733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.591850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.591883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.592016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.592158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.592306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.592486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.592663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.592784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.592897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.592925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.593034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.593144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.593255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.593373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.593491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.593631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.593762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.593883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.593909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.594528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.594971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.594997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.595086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.595203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.595352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.595520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.595650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.595814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.595971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.595997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.596116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.596255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.596395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.596510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.596643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.596808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.596947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.596980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.597101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.597127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 00:34:26.879 [2024-07-14 14:10:04.597219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.879 [2024-07-14 14:10:04.597246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.879 qpair failed and we were unable to recover it. 
00:34:26.879 [2024-07-14 14:10:04.597368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.879 [2024-07-14 14:10:04.597394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.879 qpair failed and we were unable to recover it.
00:34:26.879 [2024-07-14 14:10:04.597508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.597533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.597659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.597698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.597825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.597852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.597984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.598161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.598302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.598413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.598558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.598730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.598885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.598913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.599917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.599946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.600908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.600996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.601929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.601969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.602898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.602925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.603873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.603906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.604893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.604988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.605927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.880 [2024-07-14 14:10:04.605957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.880 qpair failed and we were unable to recover it.
00:34:26.880 [2024-07-14 14:10:04.606092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.606239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.606393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.606513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.606620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.606725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.606912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.606939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.607833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.607873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.608887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.608916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.609972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.609997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.610125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.610151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.610233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.881 [2024-07-14 14:10:04.610259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.881 qpair failed and we were unable to recover it.
00:34:26.881 [2024-07-14 14:10:04.610356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.610381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.610471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.610498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.610618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.610644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.610760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.610786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.610868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.610900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 
00:34:26.881 [2024-07-14 14:10:04.610989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.611135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.611242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.611393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.611510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 
00:34:26.881 [2024-07-14 14:10:04.611626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.611771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.611935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.611975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.612100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.612128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.612221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.612248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 
00:34:26.881 [2024-07-14 14:10:04.612391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.612417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.612508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.612536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.612637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.612676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.612835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.612862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.612987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 
00:34:26.881 [2024-07-14 14:10:04.613128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.613240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.613385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.613502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.613648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 
00:34:26.881 [2024-07-14 14:10:04.613793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.613943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.613971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.614053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.614080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.614174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.614202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.614319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.614345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 
00:34:26.881 [2024-07-14 14:10:04.614437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.881 [2024-07-14 14:10:04.614464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.881 qpair failed and we were unable to recover it. 00:34:26.881 [2024-07-14 14:10:04.614588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.614615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.614702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.614728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.614823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.614853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.614961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.614989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.615080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.615227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.615380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.615490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.615636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.615768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.615920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.615959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.616452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.616887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.616984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.617136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.617248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.617421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.617541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.617696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.617902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.617941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.618067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.618218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.618339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.618458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.618579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.618710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.618871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.618905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.619030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.619153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.619294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.619439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.619565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.619713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.619887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.619914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.620034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.620180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.620320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.620436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.620579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.620690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.620829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.620957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.620984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.621095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.621227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.621370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.621492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.621647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.621777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.621922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.621949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 
00:34:26.882 [2024-07-14 14:10:04.622064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.622090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.622208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.882 [2024-07-14 14:10:04.622233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.882 qpair failed and we were unable to recover it. 00:34:26.882 [2024-07-14 14:10:04.622349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.622375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.622488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.622513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.622599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.622624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.622753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.622791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.622893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.622921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.623404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.623940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.623966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.624050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.624165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.624289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.624435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.624584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.624694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.624814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.624936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.624963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.625053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.625206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.625349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.625466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.625611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.625725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.625859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.625890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.625982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.626100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.626244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.626408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.626518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.626633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.626758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.626930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.626958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.627051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.627171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.627288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.627421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.627558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.627689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.627823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.627951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.627983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.628588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.628859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.628983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.629134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.629257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.629397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.629511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.629656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.629832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.629964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.629991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.630093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.630132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.630255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.630283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.630403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.630430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 00:34:26.883 [2024-07-14 14:10:04.630548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.630574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.883 qpair failed and we were unable to recover it. 
00:34:26.883 [2024-07-14 14:10:04.630662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.883 [2024-07-14 14:10:04.630687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.630792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.630831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.630960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.630988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.631107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.631134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.631252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.631278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.631372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.631398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.631513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.631539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.631700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.631740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.631850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.631899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.632032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.632173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.632339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.632472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.632603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.632719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.632866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.632900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.633557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.633903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.633930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.634013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.634129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.634244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.634413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.634553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.634731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.634864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.634895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.635571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.635891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.635978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.636004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 00:34:26.884 [2024-07-14 14:10:04.636090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.884 [2024-07-14 14:10:04.636116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.884 qpair failed and we were unable to recover it. 
00:34:26.884 [2024-07-14 14:10:04.636205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.636316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.636424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.636537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.636652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.636766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.636886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.636926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:26.884 [2024-07-14 14:10:04.637776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637790] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:26.884 [2024-07-14 14:10:04.637800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.884 [2024-07-14 14:10:04.637805] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637818] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:26.884 [2024-07-14 14:10:04.637828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:26.884 [2024-07-14 14:10:04.637924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.637892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:26.884 [2024-07-14 14:10:04.637951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:26.884 [2024-07-14 14:10:04.637952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:26.884 [2024-07-14 14:10:04.638048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.638073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.884 [2024-07-14 14:10:04.637925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:26.884 [2024-07-14 14:10:04.638159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.884 [2024-07-14 14:10:04.638183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.884 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.638271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.638295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.638383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.638410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.638516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.638542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.638662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.638688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.638772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.638797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.638895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.638935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.639896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.639924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.640941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.640968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.641870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.641903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.642867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.642901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.643941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.643966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.644068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.644095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.644212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.644238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.644327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.644354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.644440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.644466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.644577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.885 [2024-07-14 14:10:04.644603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.885 qpair failed and we were unable to recover it.
00:34:26.885 [2024-07-14 14:10:04.644689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.644716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.644808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.644834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.644922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.644955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.645896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.645924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.646869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.646974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.647001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.647081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.647107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.647200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.647227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.647369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.647394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.647484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.647511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.647599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.886 [2024-07-14 14:10:04.647624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.886 qpair failed and we were unable to recover it.
00:34:26.886 [2024-07-14 14:10:04.647716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.647743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.647862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.647897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.647989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.648105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.648214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.648358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.648460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.648574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.648689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.648806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.648930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.648957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.649575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.649946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.649991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.650086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.650198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.650301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.650448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.650572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.650716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.650828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.650954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.650981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.651505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.651907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.651994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.652020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 
00:34:26.886 [2024-07-14 14:10:04.652101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.652127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.886 [2024-07-14 14:10:04.652244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.886 [2024-07-14 14:10:04.652270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.886 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.652379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.652405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.652484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.652510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.652600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.652626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.652708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.652733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.652827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.652855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.652949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.652978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.653100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.653215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.653340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.653462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.653591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.653709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.653817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.653972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.653999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.654576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.654957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.654983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.655077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.655210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.655327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.655440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.655584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.655691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.655796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.655913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.655939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.656334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.656944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.656972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.657063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.657207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.657319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.657430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.657543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.657656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.657772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.657911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.657937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.658020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.658135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.658243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.658389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.658537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.658655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.658766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.658889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.658915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 
00:34:26.887 [2024-07-14 14:10:04.659488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.887 [2024-07-14 14:10:04.659848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.887 [2024-07-14 14:10:04.659872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.887 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.659984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.660091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.660195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.660330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.660441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.660568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.660694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.660826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.660865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.660999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.661111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.661257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.661375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.661513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.661622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.661737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.661885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.661910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.661997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.662591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.662907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.662994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.663100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.663253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.663367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.663474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.663601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.663763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.663923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.663962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.664058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.664174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.664347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.664459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.664569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.664728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.664867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.664912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.665013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.665130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.665255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.665366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.665487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.665622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.665771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.665892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.665919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.666004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.666030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.666112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.666137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.666253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.666278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.666389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.666414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 
00:34:26.888 [2024-07-14 14:10:04.666501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.888 [2024-07-14 14:10:04.666527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.888 qpair failed and we were unable to recover it. 00:34:26.888 [2024-07-14 14:10:04.666614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.666639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.666787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.666814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.666921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.666962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.667082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.667224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.667360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.667475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.667590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.667739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.667904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.667932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.668582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.668899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.668991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.669105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.669281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.669404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.669516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.669635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.669750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.669866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.669899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.670535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.670910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.670937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.671018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.671044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.671138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.671163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.671260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.671285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.671376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.671404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.671538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.671565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 00:34:26.889 [2024-07-14 14:10:04.671650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.889 [2024-07-14 14:10:04.671675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.889 qpair failed and we were unable to recover it. 
00:34:26.889 [2024-07-14 14:10:04.671764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.671790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.671884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.671909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.672901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.672997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.673888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.673920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.674029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.674140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.674313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.674435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.674548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.889 [2024-07-14 14:10:04.674694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.889 qpair failed and we were unable to recover it.
00:34:26.889 [2024-07-14 14:10:04.674804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.674829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.674918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.674944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.675913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.675952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.676927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.676953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.677929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.677955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.678872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.678979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.679890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.679986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.680844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.680974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.681863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.681909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.682008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.682035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.682123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.890 [2024-07-14 14:10:04.682149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.890 qpair failed and we were unable to recover it.
00:34:26.890 [2024-07-14 14:10:04.682234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.682353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.682474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.682611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.682723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.682845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.682965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.682990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.683105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.683244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.683358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.683470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.683594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.891 [2024-07-14 14:10:04.683716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.891 qpair failed and we were unable to recover it.
00:34:26.891 [2024-07-14 14:10:04.683807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.683833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.683929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.683957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 
00:34:26.891 [2024-07-14 14:10:04.684434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.684940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.684966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 
00:34:26.891 [2024-07-14 14:10:04.685051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 
00:34:26.891 [2024-07-14 14:10:04.685626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.685897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.685987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.686103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 
00:34:26.891 [2024-07-14 14:10:04.686219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.686330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.686469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.686593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.686705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 
00:34:26.891 [2024-07-14 14:10:04.686846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.686872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.686978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.687084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.687219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.687329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 
00:34:26.891 [2024-07-14 14:10:04.687463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.687583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.687699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.891 [2024-07-14 14:10:04.687724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.891 qpair failed and we were unable to recover it. 00:34:26.891 [2024-07-14 14:10:04.687806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.687831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.687919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.687945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.688029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.688622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.688892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.688993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.689110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.689224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.689345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.689466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.689583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.689722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.689834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.689949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.689975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.690468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.690902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.690987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.691106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.691222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.691340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.691449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.691569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.691711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.691820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.691937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.691964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.692053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.692168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.692285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.692401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.692527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.692663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.692808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.692938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.692966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.693067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.693184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.693306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.693451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.693574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.693725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.693865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.693912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.694239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.694821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.694848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.694973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.695447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.695933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.695961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.696052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.696173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.696285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.696397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.696534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 
00:34:26.892 [2024-07-14 14:10:04.696648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.696768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.696885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.892 [2024-07-14 14:10:04.696913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.892 qpair failed and we were unable to recover it. 00:34:26.892 [2024-07-14 14:10:04.697003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.697126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.697266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.697386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.697495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.697601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.697712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.697816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.697934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.697962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.698391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.698858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.698895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.698987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.699606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.699899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.699986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.700106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.700212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.700348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.700494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.700615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.700736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.700863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.700894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.700997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.701466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.701957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.701987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.702072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.702187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.702296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.702443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.702553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.702676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.702804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.702943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.702970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.703092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.703210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.703325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.703441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.703556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.703670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.703788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.703935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.703961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.704046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.704071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.704155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.704180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.704258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.704284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 00:34:26.893 [2024-07-14 14:10:04.704399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.893 [2024-07-14 14:10:04.704426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.893 qpair failed and we were unable to recover it. 
00:34:26.893 [2024-07-14 14:10:04.704540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.704565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.704648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.704673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.704753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.704778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.704902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.704928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 
00:34:26.894 [2024-07-14 14:10:04.705123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 
00:34:26.894 [2024-07-14 14:10:04.705721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.705898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.705986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.706117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.706235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 
00:34:26.894 [2024-07-14 14:10:04.706351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.706471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.706619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.706736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 00:34:26.894 [2024-07-14 14:10:04.706857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.894 [2024-07-14 14:10:04.706895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.894 qpair failed and we were unable to recover it. 
00:34:26.894 [2024-07-14 14:10:04.706986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.707934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.707960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.708849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.708901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.709966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.709991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.894 [2024-07-14 14:10:04.710916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.894 [2024-07-14 14:10:04.710942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.894 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.711937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.711963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.712924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.712951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.713919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.713947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.714951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.714978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.715909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.715935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.716869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.716902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.717872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.717906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.718003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.718029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.718111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.718137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.895 qpair failed and we were unable to recover it.
00:34:26.895 [2024-07-14 14:10:04.718223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.895 [2024-07-14 14:10:04.718250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.896 qpair failed and we were unable to recover it.
00:34:26.896 [2024-07-14 14:10:04.718361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.896 [2024-07-14 14:10:04.718387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.896 qpair failed and we were unable to recover it.
00:34:26.896 [2024-07-14 14:10:04.718495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.896 [2024-07-14 14:10:04.718524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.896 qpair failed and we were unable to recover it.
00:34:26.896 [2024-07-14 14:10:04.718618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.718646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.718745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.718772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.718860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.718895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.718989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.719099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.719239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.719353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.719469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.719589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.719718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.719897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.719926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.720518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.720901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.720928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.721132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.721734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.721971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.721996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.722292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.722869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.722901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.722988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.723475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.723854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.723977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.724121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.724256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.724397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.724541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.724663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.724793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.724913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.724940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.725020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.725159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.725277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 
00:34:26.896 [2024-07-14 14:10:04.725389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.725505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.725642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.896 [2024-07-14 14:10:04.725752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.896 [2024-07-14 14:10:04.725777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.896 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.725881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.725921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 
00:34:26.897 [2024-07-14 14:10:04.726016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.726139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.726253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.726367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.726472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 
00:34:26.897 [2024-07-14 14:10:04.726581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.726733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.726897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.726924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 
00:34:26.897 [2024-07-14 14:10:04.727264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 
00:34:26.897 [2024-07-14 14:10:04.727836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.727964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.727990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 
00:34:26.897 [2024-07-14 14:10:04.728454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 00:34:26.897 [2024-07-14 14:10:04.728923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.897 [2024-07-14 14:10:04.728948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.897 qpair failed and we were unable to recover it. 
00:34:26.897 [2024-07-14 14:10:04.729037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.729917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.729942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.730870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.730908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.731848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.731977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.897 [2024-07-14 14:10:04.732775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.897 [2024-07-14 14:10:04.732800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.897 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.732900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.732926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.733919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.733946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.734890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.734988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.735921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.735948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.736917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.736944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.737940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.737967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.738903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.738994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.739873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.739989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.740109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.740215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.740323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.740432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.740557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.898 qpair failed and we were unable to recover it.
00:34:26.898 [2024-07-14 14:10:04.740680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.898 [2024-07-14 14:10:04.740707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.740821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.740846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.740947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.740973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.741890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.741917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.742885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.742911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.743001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.743026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.743107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.899 [2024-07-14 14:10:04.743131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.899 qpair failed and we were unable to recover it.
00:34:26.899 [2024-07-14 14:10:04.743221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.743334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.743441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.743591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.743708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.743826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.743946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.743971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.744386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.744853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.744972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.744997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.745525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.745900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.745926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.746021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.746158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.746278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.746417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.746525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.746633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.746768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.746885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.746915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.747001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.747027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.747124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.747150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.747229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.747255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 
00:34:26.899 [2024-07-14 14:10:04.747366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.747394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.899 [2024-07-14 14:10:04.747487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.899 [2024-07-14 14:10:04.747513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.899 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.747602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.747628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.747712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.747738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.747825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.747850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.747939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.747970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.748503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.748894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.748922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.749008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.749118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.749284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.749413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.749537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.749667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.749783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.749895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.749921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.750329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.750805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.750923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.750950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.751507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.751895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.751976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.752001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.900 [2024-07-14 14:10:04.752088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.752113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.752199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.752224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.752307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.752334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.752418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.752445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 00:34:26.900 [2024-07-14 14:10:04.752532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.900 [2024-07-14 14:10:04.752560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.900 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.758111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 [2024-07-14 14:10:04.758246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 [2024-07-14 14:10:04.758390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:26.901 [2024-07-14 14:10:04.758497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:26.901 [2024-07-14 14:10:04.758617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:26.901 [2024-07-14 14:10:04.758742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:26.901 [2024-07-14 14:10:04.758859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.758892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.901 [2024-07-14 14:10:04.758981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.759014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 [2024-07-14 14:10:04.759102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.901 [2024-07-14 14:10:04.759128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.901 qpair failed and we were unable to recover it.
00:34:26.901 [2024-07-14 14:10:04.759214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.759322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.759432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.759548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.759702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.759809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.759952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.759977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.760393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.760921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.760961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.761058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.761207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.761312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.761429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.761557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.761673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.761795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.761967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.761995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.762081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.762204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.762317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.762435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.762546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.762682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.762797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 
00:34:26.901 [2024-07-14 14:10:04.762920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.762946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.763031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.763056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.763140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.763176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.763261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.763285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.901 qpair failed and we were unable to recover it. 00:34:26.901 [2024-07-14 14:10:04.763368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.901 [2024-07-14 14:10:04.763393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 
00:34:26.902 [2024-07-14 14:10:04.763521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.763546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.763627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.763651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.763734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.763759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.763849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.763882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.763984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.764011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 
00:34:26.902 [2024-07-14 14:10:04.764096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.764122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.764255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.764282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.764371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.764397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.764514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.764542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 00:34:26.902 [2024-07-14 14:10:04.764623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.902 [2024-07-14 14:10:04.764650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.902 qpair failed and we were unable to recover it. 
00:34:26.902 [2024-07-14 14:10:04.764742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.764767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.764897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.764923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.765895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.765922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.766965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.766993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.767842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.767978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.768964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.768990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.769794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.769980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.770911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.902 [2024-07-14 14:10:04.770997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.902 [2024-07-14 14:10:04.771022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.902 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.771886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.771978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.772889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.772915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.773904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.773929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.774900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.774984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.775946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.775971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.776061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.776086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.776211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.776239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:26.903 [2024-07-14 14:10:04.776342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.776368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:26.903 [2024-07-14 14:10:04.776469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.776508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:26.903 [2024-07-14 14:10:04.776604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.903 [2024-07-14 14:10:04.776632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.776756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.776783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.776880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.776906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.777934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.777962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.778867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.778973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.779001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.779098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.779124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.779221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.903 [2024-07-14 14:10:04.779250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.903 qpair failed and we were unable to recover it.
00:34:26.903 [2024-07-14 14:10:04.779369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.779397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.779485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.779512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.779606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.779633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.779717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.779742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.779851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.779882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.780882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.780975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.781899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.781928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.782917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.782999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.783932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.783958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.784908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.784934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.785944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.785972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.786887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.786984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.904 [2024-07-14 14:10:04.787837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.904 qpair failed and we were unable to recover it.
00:34:26.904 [2024-07-14 14:10:04.787946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.905 [2024-07-14 14:10:04.787972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:26.905 qpair failed and we were unable to recover it.
00:34:26.905 [2024-07-14 14:10:04.788068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.788179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.788301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.788443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.788560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.788670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.788787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.788912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.788938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.789022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.789135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.789304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.789416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.789532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.789659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.789773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.789895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.789922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.790506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.790920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.790949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.791034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.791144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.791265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.791395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.791511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.791630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.791742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.791885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.791913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.792395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.792896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.792923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.793033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.793629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.793967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.793992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.794083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.794226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.794333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.794444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.794562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.794690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.794822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.794954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.794993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.795080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.795107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.795241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.795266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.795359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.795384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 
00:34:26.905 [2024-07-14 14:10:04.795471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.795496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.795590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.795619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.905 [2024-07-14 14:10:04.795705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.905 [2024-07-14 14:10:04.795732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.905 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.795846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.795871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.795964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.795989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.796074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.796217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.796334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.796447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.796567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.796683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.796828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.796957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.796985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.797072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.797187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.797347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.797463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.797576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.797730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.797874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.797926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.798015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.798134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.798261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 Malloc0 00:34:26.906 [2024-07-14 14:10:04.798376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.798485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.906 [2024-07-14 14:10:04.798608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:26.906 [2024-07-14 14:10:04.798795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.798948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.798975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.799060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.799183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.799303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.799430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.799549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.799701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.799821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.799956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.799982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.800440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.800942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.800968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.801078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.801222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.801333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.801449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.801563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.801692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.801823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.801963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.801990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.802039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.906 [2024-07-14 14:10:04.802075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.802232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.802351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.802456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.802568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.802678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.802806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.802926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.802952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.803503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.803945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.803970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 
00:34:26.906 [2024-07-14 14:10:04.804080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.804107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.804210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.804235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.804326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.906 [2024-07-14 14:10:04.804352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.906 qpair failed and we were unable to recover it. 00:34:26.906 [2024-07-14 14:10:04.804441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.804467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:26.907 [2024-07-14 14:10:04.804556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.804582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 
00:34:26.907 [2024-07-14 14:10:04.804704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.804729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:26.907 [2024-07-14 14:10:04.804853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.804884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:26.907 [2024-07-14 14:10:04.804987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.805015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:26.907 [2024-07-14 14:10:04.805104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.805130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:26.907 [2024-07-14 14:10:04.805241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.805266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 
00:34:26.907 [2024-07-14 14:10:04.805359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.805384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:26.907 [2024-07-14 14:10:04.805485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.907 [2024-07-14 14:10:04.805523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:26.907 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.805631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.805660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.805780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.805805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.805903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.805929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 
00:34:27.167 [2024-07-14 14:10:04.806016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.806127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.806274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.806390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.806503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 
00:34:27.167 [2024-07-14 14:10:04.806613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.806762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.806895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.806921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.807046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.807194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 
00:34:27.167 [2024-07-14 14:10:04.807304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.807412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.807552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.807667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.807785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 
00:34:27.167 [2024-07-14 14:10:04.807904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.807932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 
00:34:27.167 [2024-07-14 14:10:04.808482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.808935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.808962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.809061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.809089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 
00:34:27.167 [2024-07-14 14:10:04.809171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.809197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.809317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.809344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.809440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.809467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.167 [2024-07-14 14:10:04.809557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.167 [2024-07-14 14:10:04.809586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.167 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.809673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.809699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.809780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.809806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.809904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.809931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.810020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.810147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.810249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.810360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.168 [2024-07-14 14:10:04.810474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.810584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:27.168 [2024-07-14 14:10:04.810609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.810695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.810811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:27.168 [2024-07-14 14:10:04.810836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.810933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.810959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.811375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.811913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.811939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.812035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.812148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.812273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.812412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.812532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.812677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.812837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.812965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.812990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.813077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.813197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.813345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.813459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.813575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.813697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.813807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 
00:34:27.168 [2024-07-14 14:10:04.813937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.813963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.814043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.814068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.814160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.814185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.168 [2024-07-14 14:10:04.814296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.168 [2024-07-14 14:10:04.814321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.168 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.814409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.814434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.814564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.814603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.814696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.814723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.814831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.814869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.814991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.815107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.815219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.815358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.815480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.815599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.815772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.815903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.815930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.816523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.816951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.816979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.817076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.817206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.817352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.817470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.817578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.817682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.817793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.817922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.817950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.169 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:27.169 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.169 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:27.169 [2024-07-14 14:10:04.818751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.818786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.818920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.818949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.819075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.819197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.819324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.819466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.819578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 
00:34:27.169 [2024-07-14 14:10:04.819707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.819871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.819904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.819993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.820018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.169 qpair failed and we were unable to recover it. 00:34:27.169 [2024-07-14 14:10:04.820108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.169 [2024-07-14 14:10:04.820133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.820218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.820243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 
00:34:27.170 [2024-07-14 14:10:04.820327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.820355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.820475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.820501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.820597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.820636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.820730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.820756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.820848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.820881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 
00:34:27.170 [2024-07-14 14:10:04.820979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 
00:34:27.170 [2024-07-14 14:10:04.821560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.821906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.821932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.822018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.822043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 
00:34:27.170 [2024-07-14 14:10:04.822151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.822182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.822268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.822292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.822411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.822439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.822532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.822558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 00:34:27.170 [2024-07-14 14:10:04.822671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.170 [2024-07-14 14:10:04.822697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420 00:34:27.170 qpair failed and we were unable to recover it. 
00:34:27.170 [2024-07-14 14:10:04.822783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.822810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.822916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.822942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.823907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.823991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.824892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.824987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.170 [2024-07-14 14:10:04.825013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.170 qpair failed and we were unable to recover it.
00:34:27.170 [2024-07-14 14:10:04.825100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.825866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.825976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.826090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.826215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.171 [2024-07-14 14:10:04.826332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:27.171 [2024-07-14 14:10:04.826446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.171 [2024-07-14 14:10:04.826584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:27.171 [2024-07-14 14:10:04.826721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.171 qpair failed and we were unable to recover it.
00:34:27.171 [2024-07-14 14:10:04.826840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.171 [2024-07-14 14:10:04.826867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.172 qpair failed and we were unable to recover it.
00:34:27.172 [2024-07-14 14:10:04.826971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.172 [2024-07-14 14:10:04.826997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.172 qpair failed and we were unable to recover it.
00:34:27.172 [2024-07-14 14:10:04.827084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.172 [2024-07-14 14:10:04.827109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.172 qpair failed and we were unable to recover it.
00:34:27.172 [2024-07-14 14:10:04.827194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.172 [2024-07-14 14:10:04.827219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.827302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.827327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.827437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.827462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.827557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.827589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.827685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.827712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.827857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.827891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.827981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.828954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.828982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc428000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc430000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc438000b90 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.829891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.829918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.830006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.830031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.830115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:27.173 [2024-07-14 14:10:04.830141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce0840 with addr=10.0.0.2, port=4420
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.830283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:27.173 [2024-07-14 14:10:04.832746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.173 [2024-07-14 14:10:04.832908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.173 [2024-07-14 14:10:04.832936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.173 [2024-07-14 14:10:04.832952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.173 [2024-07-14 14:10:04.832965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.173 [2024-07-14 14:10:04.832998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.173 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:27.173 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:27.173 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:27.173 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:27.173 14:10:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1612874
00:34:27.173 [2024-07-14 14:10:04.842587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.173 [2024-07-14 14:10:04.842690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.173 [2024-07-14 14:10:04.842717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.173 [2024-07-14 14:10:04.842731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.173 [2024-07-14 14:10:04.842744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.173 [2024-07-14 14:10:04.842772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.852610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.173 [2024-07-14 14:10:04.852702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.173 [2024-07-14 14:10:04.852728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.173 [2024-07-14 14:10:04.852743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.173 [2024-07-14 14:10:04.852756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.173 [2024-07-14 14:10:04.852785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.862546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.173 [2024-07-14 14:10:04.862643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.173 [2024-07-14 14:10:04.862670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.173 [2024-07-14 14:10:04.862690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.173 [2024-07-14 14:10:04.862703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.173 [2024-07-14 14:10:04.862733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.173 qpair failed and we were unable to recover it.
00:34:27.173 [2024-07-14 14:10:04.872594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.173 [2024-07-14 14:10:04.872688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.173 [2024-07-14 14:10:04.872714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.173 [2024-07-14 14:10:04.872728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.173 [2024-07-14 14:10:04.872741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.872770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.882680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.882773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.882799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.882814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.882827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.882855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.892708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.892801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.892828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.892842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.892857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.892895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.902634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.902728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.902754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.902768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.902781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.902809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.912682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.912773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.912799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.912814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.912827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.912854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.922691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.922777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.922803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.922817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.922830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.922858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.932720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.932809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.932834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.932848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.932861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.932897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.942778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.942904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.942930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.942944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.942956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.942985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.952821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.952921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.952953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.952967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.952980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.953009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.962835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.962924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.962949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.962963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.962976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.963005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.972866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.174 [2024-07-14 14:10:04.972959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.174 [2024-07-14 14:10:04.972984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.174 [2024-07-14 14:10:04.972999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.174 [2024-07-14 14:10:04.973011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.174 [2024-07-14 14:10:04.973039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.174 qpair failed and we were unable to recover it.
00:34:27.174 [2024-07-14 14:10:04.982951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.174 [2024-07-14 14:10:04.983044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.174 [2024-07-14 14:10:04.983069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.174 [2024-07-14 14:10:04.983084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.174 [2024-07-14 14:10:04.983097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.174 [2024-07-14 14:10:04.983124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.174 qpair failed and we were unable to recover it. 
00:34:27.174 [2024-07-14 14:10:04.992932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.174 [2024-07-14 14:10:04.993033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.174 [2024-07-14 14:10:04.993059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.174 [2024-07-14 14:10:04.993072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.174 [2024-07-14 14:10:04.993085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.174 [2024-07-14 14:10:04.993120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.174 qpair failed and we were unable to recover it. 
00:34:27.174 [2024-07-14 14:10:05.002940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.174 [2024-07-14 14:10:05.003038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.174 [2024-07-14 14:10:05.003063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.174 [2024-07-14 14:10:05.003077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.174 [2024-07-14 14:10:05.003090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.174 [2024-07-14 14:10:05.003118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.174 qpair failed and we were unable to recover it. 
00:34:27.174 [2024-07-14 14:10:05.012976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.174 [2024-07-14 14:10:05.013065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.013091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.013105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.013118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.013146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.022991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.023090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.023115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.023129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.023142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.023170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.033032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.033120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.033145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.033159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.033172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.033200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.043048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.043136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.043166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.043181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.043194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.043222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.053110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.053212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.053240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.053256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.053269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.053299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.063100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.063194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.063220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.063235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.063248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.063276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.073195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.073288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.073315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.073329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.073342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.073371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.083169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.083264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.083290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.083306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.083319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.083353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.093223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.093337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.093362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.093376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.093390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.093418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.103207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.103301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.103327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.103341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.103354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.103382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.113359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.113450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.113475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.113489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.113502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.113530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.123313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.123402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.123427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.123441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.123454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.123482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.133357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.133444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.133475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.133490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.133504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.133531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.175 [2024-07-14 14:10:05.143405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.175 [2024-07-14 14:10:05.143518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.175 [2024-07-14 14:10:05.143545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.175 [2024-07-14 14:10:05.143559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.175 [2024-07-14 14:10:05.143571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.175 [2024-07-14 14:10:05.143600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.175 qpair failed and we were unable to recover it. 
00:34:27.434 [2024-07-14 14:10:05.153374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.434 [2024-07-14 14:10:05.153466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.434 [2024-07-14 14:10:05.153495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.434 [2024-07-14 14:10:05.153513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.434 [2024-07-14 14:10:05.153527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.434 [2024-07-14 14:10:05.153555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.434 qpair failed and we were unable to recover it. 
00:34:27.434 [2024-07-14 14:10:05.163397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.434 [2024-07-14 14:10:05.163489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.434 [2024-07-14 14:10:05.163515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.434 [2024-07-14 14:10:05.163530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.434 [2024-07-14 14:10:05.163543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.434 [2024-07-14 14:10:05.163571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.434 qpair failed and we were unable to recover it. 
00:34:27.434 [2024-07-14 14:10:05.173389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.434 [2024-07-14 14:10:05.173478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.434 [2024-07-14 14:10:05.173504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.434 [2024-07-14 14:10:05.173518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.434 [2024-07-14 14:10:05.173531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.434 [2024-07-14 14:10:05.173565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.183467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.183554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.183579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.183594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.183606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.183634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.193462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.193565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.193590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.193604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.193617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.193645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.203524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.203627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.203652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.203666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.203680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.203707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.213522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.213610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.213635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.213649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.213662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.213690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.223586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.223683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.223714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.223728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.223742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.223769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.233574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.233666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.233692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.233706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.233719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.233747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.243636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.243730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.243755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.243769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.243782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.243809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.253639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.435 [2024-07-14 14:10:05.253768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.435 [2024-07-14 14:10:05.253794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.435 [2024-07-14 14:10:05.253808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.435 [2024-07-14 14:10:05.253821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.435 [2024-07-14 14:10:05.253849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.435 qpair failed and we were unable to recover it. 
00:34:27.435 [2024-07-14 14:10:05.263759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.435 [2024-07-14 14:10:05.263851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.435 [2024-07-14 14:10:05.263882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.435 [2024-07-14 14:10:05.263898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.435 [2024-07-14 14:10:05.263917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.435 [2024-07-14 14:10:05.263947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.435 qpair failed and we were unable to recover it.
00:34:27.435 [2024-07-14 14:10:05.273682] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.435 [2024-07-14 14:10:05.273775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.435 [2024-07-14 14:10:05.273799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.435 [2024-07-14 14:10:05.273815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.435 [2024-07-14 14:10:05.273828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.435 [2024-07-14 14:10:05.273858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.435 qpair failed and we were unable to recover it.
00:34:27.435 [2024-07-14 14:10:05.283807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.435 [2024-07-14 14:10:05.283917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.435 [2024-07-14 14:10:05.283943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.435 [2024-07-14 14:10:05.283957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.435 [2024-07-14 14:10:05.283970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.435 [2024-07-14 14:10:05.283999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.435 qpair failed and we were unable to recover it.
00:34:27.435 [2024-07-14 14:10:05.293729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.435 [2024-07-14 14:10:05.293812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.435 [2024-07-14 14:10:05.293837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.435 [2024-07-14 14:10:05.293851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.435 [2024-07-14 14:10:05.293864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.435 [2024-07-14 14:10:05.293900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.435 qpair failed and we were unable to recover it.
00:34:27.435 [2024-07-14 14:10:05.303829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.435 [2024-07-14 14:10:05.303961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.435 [2024-07-14 14:10:05.303987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.435 [2024-07-14 14:10:05.304001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.435 [2024-07-14 14:10:05.304013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.435 [2024-07-14 14:10:05.304042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.435 qpair failed and we were unable to recover it.
00:34:27.435 [2024-07-14 14:10:05.313807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.435 [2024-07-14 14:10:05.313907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.435 [2024-07-14 14:10:05.313933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.435 [2024-07-14 14:10:05.313947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.435 [2024-07-14 14:10:05.313960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.435 [2024-07-14 14:10:05.313988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.435 qpair failed and we were unable to recover it.
00:34:27.435 [2024-07-14 14:10:05.323840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.323938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.323964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.323978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.323991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.324019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.333858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.333948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.333974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.333988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.334001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.334029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.343984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.344081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.344106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.344120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.344133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.344161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.353941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.354039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.354064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.354078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.354099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.354128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.363967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.364053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.364079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.364093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.364106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.364134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.373986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.374078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.374104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.374118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.374131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.374158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.384016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.384106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.384132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.384146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.384159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.384188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.394058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.394156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.394182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.394196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.394209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.394237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.404066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.404196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.404222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.404237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.404250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.404277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.436 [2024-07-14 14:10:05.414145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.436 [2024-07-14 14:10:05.414235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.436 [2024-07-14 14:10:05.414260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.436 [2024-07-14 14:10:05.414275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.436 [2024-07-14 14:10:05.414288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.436 [2024-07-14 14:10:05.414315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.436 qpair failed and we were unable to recover it.
00:34:27.696 [2024-07-14 14:10:05.424129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.696 [2024-07-14 14:10:05.424224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.696 [2024-07-14 14:10:05.424250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.696 [2024-07-14 14:10:05.424264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.696 [2024-07-14 14:10:05.424278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.696 [2024-07-14 14:10:05.424305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.696 qpair failed and we were unable to recover it.
00:34:27.696 [2024-07-14 14:10:05.434146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.696 [2024-07-14 14:10:05.434239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.696 [2024-07-14 14:10:05.434265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.696 [2024-07-14 14:10:05.434279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.696 [2024-07-14 14:10:05.434291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.696 [2024-07-14 14:10:05.434318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.696 qpair failed and we were unable to recover it.
00:34:27.696 [2024-07-14 14:10:05.444162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.696 [2024-07-14 14:10:05.444253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.696 [2024-07-14 14:10:05.444279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.696 [2024-07-14 14:10:05.444299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.696 [2024-07-14 14:10:05.444313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.696 [2024-07-14 14:10:05.444343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.696 qpair failed and we were unable to recover it.
00:34:27.696 [2024-07-14 14:10:05.454184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.696 [2024-07-14 14:10:05.454276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.696 [2024-07-14 14:10:05.454301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.696 [2024-07-14 14:10:05.454315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.696 [2024-07-14 14:10:05.454328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.696 [2024-07-14 14:10:05.454356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.696 qpair failed and we were unable to recover it.
00:34:27.696 [2024-07-14 14:10:05.464219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.696 [2024-07-14 14:10:05.464310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.696 [2024-07-14 14:10:05.464335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.696 [2024-07-14 14:10:05.464349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.464362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.464390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.474324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.474430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.474455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.474468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.474480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.474507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.484348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.484439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.484465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.484479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.484492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.484521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.494287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.494376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.494401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.494414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.494427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.494455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.504344] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.504439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.504465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.504478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.504491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.504519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.514437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.514524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.514550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.514564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.514577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.514605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.524375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.524469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.524494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.524508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.524522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.524549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.534419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.534501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.534526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.534547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.534560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.534589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.544463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.544556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.544582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.544596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.544609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.544637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.554476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.554568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.554593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.554607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.554620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.554650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.564634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.564767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.564793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.564807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.564820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.564848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.574513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.574604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.574630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.574644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.574657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.574684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.584593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.584690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.584718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.584734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.584747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.584776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.594681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.594775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.594801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.594814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.594827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.594855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.697 [2024-07-14 14:10:05.604604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.697 [2024-07-14 14:10:05.604691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.697 [2024-07-14 14:10:05.604717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.697 [2024-07-14 14:10:05.604731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.697 [2024-07-14 14:10:05.604744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.697 [2024-07-14 14:10:05.604771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.697 qpair failed and we were unable to recover it.
00:34:27.698 [2024-07-14 14:10:05.614626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.698 [2024-07-14 14:10:05.614718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.698 [2024-07-14 14:10:05.614743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.698 [2024-07-14 14:10:05.614757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.698 [2024-07-14 14:10:05.614770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:27.698 [2024-07-14 14:10:05.614798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:27.698 qpair failed and we were unable to recover it.
00:34:27.698 [2024-07-14 14:10:05.624706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.698 [2024-07-14 14:10:05.624806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.698 [2024-07-14 14:10:05.624831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.698 [2024-07-14 14:10:05.624852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.698 [2024-07-14 14:10:05.624865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.698 [2024-07-14 14:10:05.624901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.698 qpair failed and we were unable to recover it. 
00:34:27.698 [2024-07-14 14:10:05.634705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.698 [2024-07-14 14:10:05.634796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.698 [2024-07-14 14:10:05.634821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.698 [2024-07-14 14:10:05.634835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.698 [2024-07-14 14:10:05.634848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.698 [2024-07-14 14:10:05.634882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.698 qpair failed and we were unable to recover it. 
00:34:27.698 [2024-07-14 14:10:05.644714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.698 [2024-07-14 14:10:05.644799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.698 [2024-07-14 14:10:05.644825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.698 [2024-07-14 14:10:05.644839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.698 [2024-07-14 14:10:05.644852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.698 [2024-07-14 14:10:05.644887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.698 qpair failed and we were unable to recover it. 
00:34:27.698 [2024-07-14 14:10:05.654912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.698 [2024-07-14 14:10:05.655011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.698 [2024-07-14 14:10:05.655036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.698 [2024-07-14 14:10:05.655050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.698 [2024-07-14 14:10:05.655064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.698 [2024-07-14 14:10:05.655091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.698 qpair failed and we were unable to recover it. 
00:34:27.698 [2024-07-14 14:10:05.664859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.698 [2024-07-14 14:10:05.664966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.698 [2024-07-14 14:10:05.664992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.698 [2024-07-14 14:10:05.665006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.698 [2024-07-14 14:10:05.665019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.698 [2024-07-14 14:10:05.665046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.698 qpair failed and we were unable to recover it. 
00:34:27.698 [2024-07-14 14:10:05.674911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.698 [2024-07-14 14:10:05.675007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.698 [2024-07-14 14:10:05.675033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.698 [2024-07-14 14:10:05.675047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.698 [2024-07-14 14:10:05.675059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.698 [2024-07-14 14:10:05.675087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.698 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.684986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.685085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.685111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.685125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.685139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.685166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.694892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.694982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.695007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.695021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.695034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.695062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.704915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.705010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.705035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.705049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.705062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.705090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.715027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.715119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.715149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.715164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.715177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.715205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.724989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.725108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.725134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.725148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.725161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.725188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.734986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.735113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.735137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.735152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.735165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.735193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.745074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.745195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.745220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.745234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.745247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.745274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.755163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.755252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.755277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.755291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.755305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.755338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.765101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.765184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.765209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.765223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.765236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.765264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.775138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.775264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.775289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.775303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.775316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.775345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.785159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.971 [2024-07-14 14:10:05.785250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.971 [2024-07-14 14:10:05.785275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.971 [2024-07-14 14:10:05.785289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.971 [2024-07-14 14:10:05.785302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.971 [2024-07-14 14:10:05.785330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.971 qpair failed and we were unable to recover it. 
00:34:27.971 [2024-07-14 14:10:05.795198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.795289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.795315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.795329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.795342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.795369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.805292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.805428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.805459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.805474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.805487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.805514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.815255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.815381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.815407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.815421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.815433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.815462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.825309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.825451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.825479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.825494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.825508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.825537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.835288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.835372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.835398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.835412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.835425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.835453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.845308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.845392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.845418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.845432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.845444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.845478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.855356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.855440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.855466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.855479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.855492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.855520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.865386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.865478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.865503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.865517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.865530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.865558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.875394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.875521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.875546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.875560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.875573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.875601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.885409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.885495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.885521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.885535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.885548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.885576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.895448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.895574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.895604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.895619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.895632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.895660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.905490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.905587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.905613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.905627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.905640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.905668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.915492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.915583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.915608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.915621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.915634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.915663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.925565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.925653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.925678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.925692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.925704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.925734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.935635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.935773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.935799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.935813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.935825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.935858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:27.972 [2024-07-14 14:10:05.945578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.972 [2024-07-14 14:10:05.945669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.972 [2024-07-14 14:10:05.945694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.972 [2024-07-14 14:10:05.945707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.972 [2024-07-14 14:10:05.945720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:27.972 [2024-07-14 14:10:05.945748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:27.972 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:05.955677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:05.955767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:05.955793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.230 [2024-07-14 14:10:05.955806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.230 [2024-07-14 14:10:05.955820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.230 [2024-07-14 14:10:05.955848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.230 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:05.965629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:05.965715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:05.965741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.230 [2024-07-14 14:10:05.965755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.230 [2024-07-14 14:10:05.965767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.230 [2024-07-14 14:10:05.965796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.230 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:05.975700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:05.975819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:05.975848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.230 [2024-07-14 14:10:05.975863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.230 [2024-07-14 14:10:05.975883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.230 [2024-07-14 14:10:05.975917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.230 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:05.985771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:05.985871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:05.985909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.230 [2024-07-14 14:10:05.985924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.230 [2024-07-14 14:10:05.985938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.230 [2024-07-14 14:10:05.985966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.230 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:05.995704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:05.995794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:05.995819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.230 [2024-07-14 14:10:05.995833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.230 [2024-07-14 14:10:05.995846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.230 [2024-07-14 14:10:05.995874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.230 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:06.005742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:06.005839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:06.005865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.230 [2024-07-14 14:10:06.005888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.230 [2024-07-14 14:10:06.005903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.230 [2024-07-14 14:10:06.005931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.230 qpair failed and we were unable to recover it. 
00:34:28.230 [2024-07-14 14:10:06.015762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.230 [2024-07-14 14:10:06.015843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.230 [2024-07-14 14:10:06.015868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.015890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.015905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.015933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.025901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.026018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.026043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.026057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.026075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.026103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.035809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.035900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.035925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.035940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.035953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.035981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.045850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.045946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.045972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.045986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.045999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.046027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.055873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.056009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.056034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.056048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.056061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.056089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.065937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.066033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.066059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.066073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.066086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.066114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.075946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.076049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.076075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.076089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.076102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.076130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.085981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.086067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.086093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.086107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.086120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.086147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.095985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.096069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.096095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.096109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.096122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.096149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.106036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.106127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.106154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.106168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.106181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.106209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.116146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.116234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.116259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.116273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.116295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.116323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.126102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.126181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.126207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.126220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.126233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.126261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.136113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.136248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.136274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.136288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.136300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.136328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.146177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.146284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.146310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.146324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.146337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.231 [2024-07-14 14:10:06.146364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.231 qpair failed and we were unable to recover it. 
00:34:28.231 [2024-07-14 14:10:06.156177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.231 [2024-07-14 14:10:06.156297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.231 [2024-07-14 14:10:06.156322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.231 [2024-07-14 14:10:06.156337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.231 [2024-07-14 14:10:06.156349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.232 [2024-07-14 14:10:06.156377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.232 qpair failed and we were unable to recover it. 
00:34:28.232 [2024-07-14 14:10:06.166220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.232 [2024-07-14 14:10:06.166354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.232 [2024-07-14 14:10:06.166380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.232 [2024-07-14 14:10:06.166394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.232 [2024-07-14 14:10:06.166407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.232 [2024-07-14 14:10:06.166435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.232 qpair failed and we were unable to recover it. 
00:34:28.232 [2024-07-14 14:10:06.176206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.232 [2024-07-14 14:10:06.176295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.232 [2024-07-14 14:10:06.176320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.232 [2024-07-14 14:10:06.176333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.232 [2024-07-14 14:10:06.176346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.232 [2024-07-14 14:10:06.176374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.232 qpair failed and we were unable to recover it. 
00:34:28.232 [2024-07-14 14:10:06.186346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.232 [2024-07-14 14:10:06.186457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.232 [2024-07-14 14:10:06.186482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.232 [2024-07-14 14:10:06.186496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.232 [2024-07-14 14:10:06.186509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.232 [2024-07-14 14:10:06.186537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.232 qpair failed and we were unable to recover it. 
00:34:28.232 [2024-07-14 14:10:06.196309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.232 [2024-07-14 14:10:06.196401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.232 [2024-07-14 14:10:06.196426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.232 [2024-07-14 14:10:06.196440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.232 [2024-07-14 14:10:06.196453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.232 [2024-07-14 14:10:06.196480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.232 qpair failed and we were unable to recover it. 
00:34:28.232 [2024-07-14 14:10:06.206361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.232 [2024-07-14 14:10:06.206457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.232 [2024-07-14 14:10:06.206483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.232 [2024-07-14 14:10:06.206503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.232 [2024-07-14 14:10:06.206517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.232 [2024-07-14 14:10:06.206546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.232 qpair failed and we were unable to recover it. 
00:34:28.489 [2024-07-14 14:10:06.216384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.489 [2024-07-14 14:10:06.216476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.489 [2024-07-14 14:10:06.216502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.489 [2024-07-14 14:10:06.216515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.489 [2024-07-14 14:10:06.216529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.489 [2024-07-14 14:10:06.216556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.489 qpair failed and we were unable to recover it. 
00:34:28.489 [2024-07-14 14:10:06.226408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.489 [2024-07-14 14:10:06.226503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.489 [2024-07-14 14:10:06.226528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.489 [2024-07-14 14:10:06.226542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.489 [2024-07-14 14:10:06.226555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.489 [2024-07-14 14:10:06.226585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.489 qpair failed and we were unable to recover it.
00:34:28.489 [2024-07-14 14:10:06.236404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.489 [2024-07-14 14:10:06.236497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.489 [2024-07-14 14:10:06.236523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.489 [2024-07-14 14:10:06.236537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.489 [2024-07-14 14:10:06.236550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.489 [2024-07-14 14:10:06.236578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.489 qpair failed and we were unable to recover it.
00:34:28.489 [2024-07-14 14:10:06.246484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.489 [2024-07-14 14:10:06.246596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.489 [2024-07-14 14:10:06.246622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.489 [2024-07-14 14:10:06.246637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.489 [2024-07-14 14:10:06.246651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.489 [2024-07-14 14:10:06.246679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.489 qpair failed and we were unable to recover it.
00:34:28.489 [2024-07-14 14:10:06.256454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.489 [2024-07-14 14:10:06.256588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.256613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.256627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.256640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.256667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.266576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.266677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.266703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.266717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.266730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.266757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.276539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.276668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.276693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.276708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.276721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.276749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.286557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.286660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.286686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.286700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.286714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.286741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.296617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.296739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.296765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.296785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.296799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.296827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.306613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.306707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.306732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.306746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.306760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.306787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.316640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.316728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.316754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.316771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.316785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.316813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.326663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.326753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.326779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.326793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.326805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.326833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.336684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.336767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.336792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.336807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.336820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.336847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.346861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.346996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.347021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.347036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.347049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.347076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.356760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.356862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.356897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.356912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.356924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.356953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.366808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.366911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.366937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.366951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.366964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.366992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.376842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.376944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.376970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.376989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.377004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.377033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.386841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.386944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.386970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.386990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.387004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.387034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.396929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.397015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.397041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.397055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.397068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.397096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.406913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.407033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.407059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.407073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.407086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.407114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.416923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.417009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.417035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.417048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.417062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.417090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.426937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.427065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.427091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.427104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.427117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.427145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.436980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.437070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.437095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.437109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.437121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.437148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.446999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.447089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.447115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.447131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.447144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.447171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.457039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.457123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.457149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.457163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.457175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.457203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.490 [2024-07-14 14:10:06.467068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.490 [2024-07-14 14:10:06.467163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.490 [2024-07-14 14:10:06.467189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.490 [2024-07-14 14:10:06.467202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.490 [2024-07-14 14:10:06.467216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.490 [2024-07-14 14:10:06.467244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.490 qpair failed and we were unable to recover it.
00:34:28.749 [2024-07-14 14:10:06.477121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.749 [2024-07-14 14:10:06.477214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.749 [2024-07-14 14:10:06.477243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.749 [2024-07-14 14:10:06.477257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.749 [2024-07-14 14:10:06.477269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.749 [2024-07-14 14:10:06.477297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.749 qpair failed and we were unable to recover it.
00:34:28.749 [2024-07-14 14:10:06.487109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.749 [2024-07-14 14:10:06.487196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.749 [2024-07-14 14:10:06.487222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.749 [2024-07-14 14:10:06.487237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.749 [2024-07-14 14:10:06.487249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.749 [2024-07-14 14:10:06.487277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.749 qpair failed and we were unable to recover it.
00:34:28.749 [2024-07-14 14:10:06.497179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.749 [2024-07-14 14:10:06.497264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.749 [2024-07-14 14:10:06.497290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.749 [2024-07-14 14:10:06.497304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.749 [2024-07-14 14:10:06.497316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.749 [2024-07-14 14:10:06.497344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.749 qpair failed and we were unable to recover it.
00:34:28.749 [2024-07-14 14:10:06.507271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.749 [2024-07-14 14:10:06.507379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.507405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.507419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.507431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.507459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.517203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.517298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.517324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.517338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.517351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.517379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.527284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.527378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.527404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.527418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.527431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.527459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.537293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.537417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.537444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.537458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.537471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.537499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.547311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.547405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.547430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.547445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.547458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.547486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.557371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.557475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.557501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.557515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.557528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.557555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.567410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.567521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.567563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.567578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.567591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.567618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.577383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.750 [2024-07-14 14:10:06.577499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.750 [2024-07-14 14:10:06.577524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.750 [2024-07-14 14:10:06.577538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.750 [2024-07-14 14:10:06.577551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:28.750 [2024-07-14 14:10:06.577579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:28.750 qpair failed and we were unable to recover it.
00:34:28.750 [2024-07-14 14:10:06.587491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.587590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.587616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.587631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.750 [2024-07-14 14:10:06.587644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.750 [2024-07-14 14:10:06.587671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.750 qpair failed and we were unable to recover it. 
00:34:28.750 [2024-07-14 14:10:06.597483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.597574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.597599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.597613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.750 [2024-07-14 14:10:06.597626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.750 [2024-07-14 14:10:06.597654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.750 qpair failed and we were unable to recover it. 
00:34:28.750 [2024-07-14 14:10:06.607481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.607597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.607623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.607637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.750 [2024-07-14 14:10:06.607650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.750 [2024-07-14 14:10:06.607684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.750 qpair failed and we were unable to recover it. 
00:34:28.750 [2024-07-14 14:10:06.617497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.617597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.617624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.617639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.750 [2024-07-14 14:10:06.617652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.750 [2024-07-14 14:10:06.617680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.750 qpair failed and we were unable to recover it. 
00:34:28.750 [2024-07-14 14:10:06.627541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.627632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.627657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.627671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.750 [2024-07-14 14:10:06.627684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.750 [2024-07-14 14:10:06.627711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.750 qpair failed and we were unable to recover it. 
00:34:28.750 [2024-07-14 14:10:06.637567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.637656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.637681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.637695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.750 [2024-07-14 14:10:06.637708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.750 [2024-07-14 14:10:06.637736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.750 qpair failed and we were unable to recover it. 
00:34:28.750 [2024-07-14 14:10:06.647590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.750 [2024-07-14 14:10:06.647674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.750 [2024-07-14 14:10:06.647699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.750 [2024-07-14 14:10:06.647713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.647726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.647754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.657616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.657705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.657735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.657750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.657763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.657791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.667644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.667772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.667797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.667811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.667824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.667851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.677677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.677765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.677790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.677804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.677817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.677845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.687692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.687776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.687801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.687815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.687828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.687857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.697742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.697837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.697865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.697890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.697906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.697941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.707766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.707856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.707889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.707905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.707918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.707946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.717789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.717890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.717916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.717930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.717943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.717972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:28.751 [2024-07-14 14:10:06.727819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.751 [2024-07-14 14:10:06.727923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.751 [2024-07-14 14:10:06.727949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.751 [2024-07-14 14:10:06.727964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.751 [2024-07-14 14:10:06.727977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:28.751 [2024-07-14 14:10:06.728006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.751 qpair failed and we were unable to recover it. 
00:34:29.010 [2024-07-14 14:10:06.737852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.010 [2024-07-14 14:10:06.737948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.010 [2024-07-14 14:10:06.737974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.010 [2024-07-14 14:10:06.737988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.010 [2024-07-14 14:10:06.738001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.010 [2024-07-14 14:10:06.738029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.010 qpair failed and we were unable to recover it. 
00:34:29.010 [2024-07-14 14:10:06.747927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.010 [2024-07-14 14:10:06.748066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.010 [2024-07-14 14:10:06.748096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.010 [2024-07-14 14:10:06.748111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.010 [2024-07-14 14:10:06.748124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.010 [2024-07-14 14:10:06.748152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.010 qpair failed and we were unable to recover it. 
00:34:29.010 [2024-07-14 14:10:06.757913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.010 [2024-07-14 14:10:06.758021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.010 [2024-07-14 14:10:06.758046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.010 [2024-07-14 14:10:06.758060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.010 [2024-07-14 14:10:06.758073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.010 [2024-07-14 14:10:06.758101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.010 qpair failed and we were unable to recover it. 
00:34:29.010 [2024-07-14 14:10:06.767967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.010 [2024-07-14 14:10:06.768058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.010 [2024-07-14 14:10:06.768083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.010 [2024-07-14 14:10:06.768098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.010 [2024-07-14 14:10:06.768111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.010 [2024-07-14 14:10:06.768139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.010 qpair failed and we were unable to recover it. 
00:34:29.010 [2024-07-14 14:10:06.777979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.010 [2024-07-14 14:10:06.778108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.010 [2024-07-14 14:10:06.778133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.010 [2024-07-14 14:10:06.778147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.010 [2024-07-14 14:10:06.778160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.010 [2024-07-14 14:10:06.778187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.010 qpair failed and we were unable to recover it. 
00:34:29.010 [2024-07-14 14:10:06.788000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.010 [2024-07-14 14:10:06.788092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.010 [2024-07-14 14:10:06.788117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.788131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.788149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.788177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.798017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.798119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.798145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.798159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.798172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.798199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.808037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.808125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.808151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.808165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.808178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.808206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.818082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.818176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.818201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.818215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.818227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.818255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.828138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.828239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.828264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.828278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.828291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.828318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.838141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.838238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.838263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.838277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.838290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.838318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.848142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.848243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.848269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.848282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.848295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.848323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.858188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.858278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.858304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.858318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.858331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.858359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.868239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.868330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.868355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.868369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.868382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.868409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.878236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.878326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.878351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.878365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.878386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.878415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.888341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.888466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.888492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.888506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.888519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.888546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.898295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.898380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.898406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.898420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.898433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.898461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.908426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.908542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.908567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.908581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.908595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.908622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.918347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.918438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.918464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.918478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.918491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.918519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.011 qpair failed and we were unable to recover it. 
00:34:29.011 [2024-07-14 14:10:06.928368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.011 [2024-07-14 14:10:06.928460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.011 [2024-07-14 14:10:06.928485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.011 [2024-07-14 14:10:06.928499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.011 [2024-07-14 14:10:06.928512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.011 [2024-07-14 14:10:06.928540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.012 [2024-07-14 14:10:06.938397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.012 [2024-07-14 14:10:06.938522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.012 [2024-07-14 14:10:06.938547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.012 [2024-07-14 14:10:06.938561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.012 [2024-07-14 14:10:06.938574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.012 [2024-07-14 14:10:06.938604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.012 [2024-07-14 14:10:06.948507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.012 [2024-07-14 14:10:06.948599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.012 [2024-07-14 14:10:06.948624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.012 [2024-07-14 14:10:06.948638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.012 [2024-07-14 14:10:06.948651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.012 [2024-07-14 14:10:06.948679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.012 [2024-07-14 14:10:06.958436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.012 [2024-07-14 14:10:06.958525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.012 [2024-07-14 14:10:06.958549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.012 [2024-07-14 14:10:06.958563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.012 [2024-07-14 14:10:06.958576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.012 [2024-07-14 14:10:06.958604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.012 [2024-07-14 14:10:06.968586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.012 [2024-07-14 14:10:06.968673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.012 [2024-07-14 14:10:06.968699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.012 [2024-07-14 14:10:06.968712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.012 [2024-07-14 14:10:06.968730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.012 [2024-07-14 14:10:06.968759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.012 [2024-07-14 14:10:06.978597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.012 [2024-07-14 14:10:06.978689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.012 [2024-07-14 14:10:06.978715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.012 [2024-07-14 14:10:06.978728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.012 [2024-07-14 14:10:06.978741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.012 [2024-07-14 14:10:06.978769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.012 [2024-07-14 14:10:06.988547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.012 [2024-07-14 14:10:06.988639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.012 [2024-07-14 14:10:06.988664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.012 [2024-07-14 14:10:06.988677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.012 [2024-07-14 14:10:06.988690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.012 [2024-07-14 14:10:06.988718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.012 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:06.998574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:06.998671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:06.998697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:06.998711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:06.998724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:06.998751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.008625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.008746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.008772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.008786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.008799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.008827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.018622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.018706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.018732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.018746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.018759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.018787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.028669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.028762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.028787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.028801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.028814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.028842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.038683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.038776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.038801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.038815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.038828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.038855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.048715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.048822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.048847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.048861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.048874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.048912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.058743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.058873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.058905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.058925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.058939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.058967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.068804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.068907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.068933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.068947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.068960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.068988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.078813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.078917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.078943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.078957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.078970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.078999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.088922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.272 [2024-07-14 14:10:07.089023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.272 [2024-07-14 14:10:07.089049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.272 [2024-07-14 14:10:07.089063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.272 [2024-07-14 14:10:07.089076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.272 [2024-07-14 14:10:07.089104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.272 qpair failed and we were unable to recover it. 
00:34:29.272 [2024-07-14 14:10:07.098847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.098940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.098965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.098979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.098992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.099022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.108910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.109005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.109030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.109045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.109058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.109086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.118919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.119014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.119039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.119053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.119065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.119094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.129028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.129118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.129144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.129158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.129171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.129199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.138961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.139051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.139076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.139090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.139103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.139131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.149028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.149126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.149156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.149177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.149190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.149219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.159021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.159122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.159148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.159163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.159175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.159203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.169063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.169185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.169211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.169226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.169238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.169266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.179126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.179244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.179269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.179283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.179296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.179324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.189121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.189217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.189243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.189257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.189270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.189297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.199188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.199317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.199343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.199357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.199370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.199398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.209246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.209337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.209363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.209377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.209390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.209418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.219223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.219314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.219339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.219353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.219366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.219395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.229368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.229461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.229486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.229500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.273 [2024-07-14 14:10:07.229513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.273 [2024-07-14 14:10:07.229541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.273 qpair failed and we were unable to recover it. 
00:34:29.273 [2024-07-14 14:10:07.239254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.273 [2024-07-14 14:10:07.239342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.273 [2024-07-14 14:10:07.239372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.273 [2024-07-14 14:10:07.239387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.274 [2024-07-14 14:10:07.239400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.274 [2024-07-14 14:10:07.239428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.274 qpair failed and we were unable to recover it. 
00:34:29.274 [2024-07-14 14:10:07.249285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.274 [2024-07-14 14:10:07.249385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.274 [2024-07-14 14:10:07.249410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.274 [2024-07-14 14:10:07.249424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.274 [2024-07-14 14:10:07.249437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.274 [2024-07-14 14:10:07.249465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.274 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.259324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.259410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.259436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.259450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.259463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.259491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.269354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.269476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.269501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.269515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.269527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.269556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.279365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.279457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.279482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.279497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.279510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.279539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.289402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.289490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.289515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.289529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.289542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.289569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.299436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.299526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.299552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.299565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.299578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.299606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.309443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.309538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.309564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.309578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.309591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.309618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.319486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.319575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.319600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.319614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.319627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.319656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.329542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.329658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.329688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.329703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.329716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.329743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.339523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.339607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.339633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.339647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.339660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.339688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.349572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.349660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.349685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.349699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.349712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.349741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.359590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.359682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.359707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.359721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.359734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.359762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.369599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.369683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.369708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.369722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.369736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.369770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.534 [2024-07-14 14:10:07.379657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.534 [2024-07-14 14:10:07.379748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.534 [2024-07-14 14:10:07.379773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.534 [2024-07-14 14:10:07.379788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.534 [2024-07-14 14:10:07.379801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.534 [2024-07-14 14:10:07.379829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.534 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.389708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.389807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.389832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.389846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.389859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.389896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.399727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.399819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.399844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.399859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.399872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.399911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.409728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.409821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.409847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.409861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.409874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.409912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.419769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.419902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.419933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.419947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.419961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.419989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.429904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.430067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.430093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.430107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.430120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.430148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.439827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.439930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.439956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.439971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.439983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.440010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.449844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.449949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.449975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.449990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.450002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.450031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.459897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.459997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.460022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.460036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.460049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.460082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.469933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.470035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.470064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.470079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.470093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.470122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.479922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.480017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.480042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.480055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.480067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.480096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.490019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.490108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.490134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.490148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.490161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.490189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.500019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.500107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.500132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.500146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.500159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.500186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.535 [2024-07-14 14:10:07.510064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.535 [2024-07-14 14:10:07.510165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.535 [2024-07-14 14:10:07.510195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.535 [2024-07-14 14:10:07.510209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.535 [2024-07-14 14:10:07.510223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.535 [2024-07-14 14:10:07.510250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.535 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.520067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.520155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.520180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.520194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.520207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.520237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.530079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.530172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.530197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.530211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.530224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.530252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.540099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.540193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.540219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.540233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.540247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.540276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.550196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.550303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.550328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.550342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.550360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.550389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.560242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.560334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.560359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.560373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.560386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.560414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.570165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.570257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.570283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.570297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.570309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.570337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.580202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.580324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.580348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.580363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.580377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.580406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.590242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.590333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.590358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.590372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.590385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.590412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.600368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.600466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.600492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.600506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.600519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.600547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.610320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.610427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.610453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.610467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.610479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.610507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.620317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.620403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.620428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.620442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.620455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.620483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.630354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.630491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.796 [2024-07-14 14:10:07.630516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.796 [2024-07-14 14:10:07.630530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.796 [2024-07-14 14:10:07.630543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.796 [2024-07-14 14:10:07.630572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.796 qpair failed and we were unable to recover it. 
00:34:29.796 [2024-07-14 14:10:07.640407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.796 [2024-07-14 14:10:07.640497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.640524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.640539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.640561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.640589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.650411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.650531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.650556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.650570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.650582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.650611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.660441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.660569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.660594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.660609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.660623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.660653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.670480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.670576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.670605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.670620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.670634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.670663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.680516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.680609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.680635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.680649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.680662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.680692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.690524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.690625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.690651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.690665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.690678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.690706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.700547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.700639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.700664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.700678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.700690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.700718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.710590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.710686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.710711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.710726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.710739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.710766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.720690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.720775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.720801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.720814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.720828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.720855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.730661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.730754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.730780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.730794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.730812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.730841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.740667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.740752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.740778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.740792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.740805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.740833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.750706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.750824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.750850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.750864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.750883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.750914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.760737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.760831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.760860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.760885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.760903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.760933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:29.797 [2024-07-14 14:10:07.770772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.797 [2024-07-14 14:10:07.770914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.797 [2024-07-14 14:10:07.770940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.797 [2024-07-14 14:10:07.770954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.797 [2024-07-14 14:10:07.770967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:29.797 [2024-07-14 14:10:07.770995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:29.797 qpair failed and we were unable to recover it. 
00:34:30.056 [2024-07-14 14:10:07.780819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.056 [2024-07-14 14:10:07.780914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.056 [2024-07-14 14:10:07.780946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.057 [2024-07-14 14:10:07.780960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.057 [2024-07-14 14:10:07.780973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.057 [2024-07-14 14:10:07.781002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.057 qpair failed and we were unable to recover it. 
00:34:30.057 [2024-07-14 14:10:07.790834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.057 [2024-07-14 14:10:07.790940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.057 [2024-07-14 14:10:07.790966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.057 [2024-07-14 14:10:07.790980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.057 [2024-07-14 14:10:07.790993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.057 [2024-07-14 14:10:07.791021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.057 qpair failed and we were unable to recover it. 
00:34:30.057 [2024-07-14 14:10:07.800830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.057 [2024-07-14 14:10:07.800923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.057 [2024-07-14 14:10:07.800958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.057 [2024-07-14 14:10:07.800974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.057 [2024-07-14 14:10:07.800986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.057 [2024-07-14 14:10:07.801019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.057 qpair failed and we were unable to recover it. 
00:34:30.057 [2024-07-14 14:10:07.810948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.057 [2024-07-14 14:10:07.811043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.057 [2024-07-14 14:10:07.811069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.057 [2024-07-14 14:10:07.811083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.057 [2024-07-14 14:10:07.811096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.057 [2024-07-14 14:10:07.811124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.057 qpair failed and we were unable to recover it. 
00:34:30.057 [2024-07-14 14:10:07.820913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.057 [2024-07-14 14:10:07.821039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.057 [2024-07-14 14:10:07.821065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.057 [2024-07-14 14:10:07.821085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.057 [2024-07-14 14:10:07.821098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.057 [2024-07-14 14:10:07.821126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.057 qpair failed and we were unable to recover it. 
00:34:30.057 [2024-07-14 14:10:07.830934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.831028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.831054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.831068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.831081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.831109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.840961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.841052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.841078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.841092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.841105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.841133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.851023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.851149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.851174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.851188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.851201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.851229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.861035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.861170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.861196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.861209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.861222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.861249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.871072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.871173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.871201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.871219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.871233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.871262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.881092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.881235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.881261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.881275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.881288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.881316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.891096] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.891227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.891252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.891266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.891281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.891308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.901148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.901234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.901259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.901273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.901286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.901314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.911152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.911250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.057 [2024-07-14 14:10:07.911275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.057 [2024-07-14 14:10:07.911295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.057 [2024-07-14 14:10:07.911309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.057 [2024-07-14 14:10:07.911338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.057 qpair failed and we were unable to recover it.
00:34:30.057 [2024-07-14 14:10:07.921170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.057 [2024-07-14 14:10:07.921261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.921286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.921300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.921313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.921341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.931193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.931297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.931322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.931336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.931350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.931377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.941230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.941320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.941346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.941361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.941373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.941401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.951330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.951419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.951445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.951459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.951472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.951499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.961281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.961369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.961395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.961409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.961422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.961449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.971319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.971445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.971470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.971484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.971497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.971527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.981452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.981592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.981617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.981631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.981644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.981674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:07.991376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:07.991471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:07.991496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:07.991510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:07.991523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:07.991553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:08.001415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:08.001544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:08.001569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:08.001589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:08.001603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:08.001631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:08.011531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:08.011623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:08.011649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:08.011663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:08.011676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:08.011703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:08.021459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:08.021555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:08.021583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:08.021599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:08.021612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:08.021640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.058 [2024-07-14 14:10:08.031528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.058 [2024-07-14 14:10:08.031647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.058 [2024-07-14 14:10:08.031673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.058 [2024-07-14 14:10:08.031688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.058 [2024-07-14 14:10:08.031700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.058 [2024-07-14 14:10:08.031729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.058 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.041516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.041605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.041630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.041644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.041657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.041685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.051677] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.051808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.051834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.051848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.051861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.051897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.061602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.061733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.061759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.061773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.061785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.061813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.071612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.071703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.071729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.071743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.071756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.071783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.081689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.081785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.081811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.081826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.081839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.081867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.091688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.091776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.091806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.091821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.091834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.091863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.101731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.101856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.101889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.101905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.101918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.101946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.111735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.111888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.111914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.111928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.111941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.111969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.121774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.121913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.121938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.121952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.121965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.121993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.131890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.132018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.132043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.132056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.132069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.132103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.141831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.141929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.141954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.141969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.141981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.142009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.151852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.151966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.151992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.152006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.152019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.152047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.318 [2024-07-14 14:10:08.161925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.318 [2024-07-14 14:10:08.162014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.318 [2024-07-14 14:10:08.162039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.318 [2024-07-14 14:10:08.162052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.318 [2024-07-14 14:10:08.162065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.318 [2024-07-14 14:10:08.162093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.318 qpair failed and we were unable to recover it.
00:34:30.319 [2024-07-14 14:10:08.171954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.319 [2024-07-14 14:10:08.172044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.319 [2024-07-14 14:10:08.172070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.319 [2024-07-14 14:10:08.172084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.319 [2024-07-14 14:10:08.172097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.319 [2024-07-14 14:10:08.172125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.319 qpair failed and we were unable to recover it.
00:34:30.319 [2024-07-14 14:10:08.181945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.319 [2024-07-14 14:10:08.182033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.319 [2024-07-14 14:10:08.182063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.319 [2024-07-14 14:10:08.182078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.319 [2024-07-14 14:10:08.182091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.319 [2024-07-14 14:10:08.182119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.319 qpair failed and we were unable to recover it.
00:34:30.319 [2024-07-14 14:10:08.191992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.192089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.192114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.192128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.192141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.192169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.202001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.202108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.202132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.202147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.202159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.202187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.212042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.212170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.212195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.212209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.212222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.212250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.222060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.222146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.222171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.222185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.222198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.222231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.232079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.232171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.232197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.232211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.232224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.232252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.242131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.242230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.242259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.242274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.242287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.242316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.252123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.252216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.252241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.252256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.252269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.252296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.262151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.262250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.262275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.262289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.262302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.262330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.272195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.272290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.272321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.272336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.272349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.272376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.282208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.282293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.282318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.282332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.282344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.282373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.319 [2024-07-14 14:10:08.292258] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.319 [2024-07-14 14:10:08.292352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.319 [2024-07-14 14:10:08.292378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.319 [2024-07-14 14:10:08.292392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.319 [2024-07-14 14:10:08.292405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.319 [2024-07-14 14:10:08.292433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.319 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.302264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.302394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.302419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.302434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.302447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.302475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.312289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.312378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.312404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.312418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.312431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.312464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.322361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.322482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.322507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.322521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.322535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.322562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.332347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.332433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.332458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.332472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.332485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.332513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.342374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.342462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.342487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.342501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.342514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.342541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.352396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.352493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.352518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.352533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.352545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.352573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.362423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.362552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.362586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.362601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.362614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.362641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.372440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.372533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.372559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.372573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.372586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.372613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.382568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.382652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.382677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.382691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.382704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.579 [2024-07-14 14:10:08.382732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.579 qpair failed and we were unable to recover it. 
00:34:30.579 [2024-07-14 14:10:08.392547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.579 [2024-07-14 14:10:08.392640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.579 [2024-07-14 14:10:08.392665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.579 [2024-07-14 14:10:08.392679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.579 [2024-07-14 14:10:08.392693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.392720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.402534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.402658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.402683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.402697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.402715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.402743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.412568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.412684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.412709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.412723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.412736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.412764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.422618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.422729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.422755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.422769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.422782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.422809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.432633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.432723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.432748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.432762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.432775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.432802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.442648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.442780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.442805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.442819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.442831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.442858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.452686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.452783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.452809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.452823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.452836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.452863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.462694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.462779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.462804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.462818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.462831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.462858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.472826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.472928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.472953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.472967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.472980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.473007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.482770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.482858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.482889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.482904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.482916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.482944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.492825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.492932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.492958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.492972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.492990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.493018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.502881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.502990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.503016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.503030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.503043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.503070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.512859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.512959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.512984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.512998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.513011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.513038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.522940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.523041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.523066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.523080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.523093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.523121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.580 qpair failed and we were unable to recover it. 
00:34:30.580 [2024-07-14 14:10:08.532928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.580 [2024-07-14 14:10:08.533017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.580 [2024-07-14 14:10:08.533042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.580 [2024-07-14 14:10:08.533056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.580 [2024-07-14 14:10:08.533069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.580 [2024-07-14 14:10:08.533097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.581 qpair failed and we were unable to recover it. 
00:34:30.581 [2024-07-14 14:10:08.542979] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.581 [2024-07-14 14:10:08.543074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.581 [2024-07-14 14:10:08.543100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.581 [2024-07-14 14:10:08.543114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.581 [2024-07-14 14:10:08.543126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.581 [2024-07-14 14:10:08.543154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.581 qpair failed and we were unable to recover it. 
00:34:30.581 [2024-07-14 14:10:08.553003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.581 [2024-07-14 14:10:08.553098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.581 [2024-07-14 14:10:08.553124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.581 [2024-07-14 14:10:08.553138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.581 [2024-07-14 14:10:08.553151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.581 [2024-07-14 14:10:08.553178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.581 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.563014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.563107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.563132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.563147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.563160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.563188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.573095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.573198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.573224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.573239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.573252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.573279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.583124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.583216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.583242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.583262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.583276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.583304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.593146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.593260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.593285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.593299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.593311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.593339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.603196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.603325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.603351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.603365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.603378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.603406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.613165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.613249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.613274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.613288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.613301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.613331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.623259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.623352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.623378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.623393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.623406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.623434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.633229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.633351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.633377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.633391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.633405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.633433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.643255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.840 [2024-07-14 14:10:08.643346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.840 [2024-07-14 14:10:08.643372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.840 [2024-07-14 14:10:08.643386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.840 [2024-07-14 14:10:08.643399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.840 [2024-07-14 14:10:08.643427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.840 qpair failed and we were unable to recover it. 
00:34:30.840 [2024-07-14 14:10:08.653342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.653426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.653452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.653466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.653479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.653507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.663307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.663400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.663426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.663440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.663453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.663481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.673310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.673444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.673469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.673488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.673502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.673531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.683370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.683459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.683485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.683499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.683512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.683540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.693429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.693527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.693552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.693566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.693579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.693609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.703391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.703480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.703505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.703520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.703533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.703562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.713464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.713554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.713580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.713594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.713606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.713634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.723525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.723611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.723637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.723652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.723665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.723693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.733498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.733591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.733618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.733632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.733645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.733672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.743505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.743647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.743672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.743686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.743699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.743729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.753543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.753635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.753661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.753675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.753688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.753716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.763555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.763645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.763670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.763690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.763703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.763731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.773603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.773698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.773723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.773736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.773749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.773777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.783628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.841 [2024-07-14 14:10:08.783713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.841 [2024-07-14 14:10:08.783739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.841 [2024-07-14 14:10:08.783752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.841 [2024-07-14 14:10:08.783765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:30.841 [2024-07-14 14:10:08.783793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.841 qpair failed and we were unable to recover it. 
00:34:30.841 [2024-07-14 14:10:08.793698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.841 [2024-07-14 14:10:08.793813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.842 [2024-07-14 14:10:08.793841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.842 [2024-07-14 14:10:08.793857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.842 [2024-07-14 14:10:08.793870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.842 [2024-07-14 14:10:08.793910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.842 qpair failed and we were unable to recover it.
00:34:30.842 [2024-07-14 14:10:08.803732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.842 [2024-07-14 14:10:08.803837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.842 [2024-07-14 14:10:08.803873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.842 [2024-07-14 14:10:08.803899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.842 [2024-07-14 14:10:08.803913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.842 [2024-07-14 14:10:08.803941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.842 qpair failed and we were unable to recover it.
00:34:30.842 [2024-07-14 14:10:08.813699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.842 [2024-07-14 14:10:08.813786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.842 [2024-07-14 14:10:08.813812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.842 [2024-07-14 14:10:08.813826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.842 [2024-07-14 14:10:08.813839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:30.842 [2024-07-14 14:10:08.813867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:30.842 qpair failed and we were unable to recover it.
00:34:31.100 [2024-07-14 14:10:08.823727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.100 [2024-07-14 14:10:08.823816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.100 [2024-07-14 14:10:08.823842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.823856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.823869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.823907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.833789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.833899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.833925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.833939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.833953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.833983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.843794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.843899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.843925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.843939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.843951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.843980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.853826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.853931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.853962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.853977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.853991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.854019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.863847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.863960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.863986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.864001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.864014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.864042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.873911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.874020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.874044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.874058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.874073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.874102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.883924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.884020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.884045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.884059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.884072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.884100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.893942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.894031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.894055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.894069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.894082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.894110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.904016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.904100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.904124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.904139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.904152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.904180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.914021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.914139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.914172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.914186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.914199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.914227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.924040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.924169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.924198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.924213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.924226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.924255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.934044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.934133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.934159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.934174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.934187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.934216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.944089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.944176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.944207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.944222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.944235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.944263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.954163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.954255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.954281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.954295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.101 [2024-07-14 14:10:08.954308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.101 [2024-07-14 14:10:08.954336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.101 qpair failed and we were unable to recover it.
00:34:31.101 [2024-07-14 14:10:08.964173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.101 [2024-07-14 14:10:08.964264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.101 [2024-07-14 14:10:08.964290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.101 [2024-07-14 14:10:08.964304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:08.964317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:08.964347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:08.974168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:08.974282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:08.974308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:08.974322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:08.974335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:08.974362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:08.984198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:08.984282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:08.984307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:08.984321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:08.984334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:08.984368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:08.994275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:08.994366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:08.994391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:08.994406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:08.994419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:08.994447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.004240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.004326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.004352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.004366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.004379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.004406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.014270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.014383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.014408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.014422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.014435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.014462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.024322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.024422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.024447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.024461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.024474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.024502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.034377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.034488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.034517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.034532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.034545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.034573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.044368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.044456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.044481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.044495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.044508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.044535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.054410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.054500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.054526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.054540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.054553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.054581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.064445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.064533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.064558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.064572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.064585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.064613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.102 [2024-07-14 14:10:09.074461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.102 [2024-07-14 14:10:09.074555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.102 [2024-07-14 14:10:09.074579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.102 [2024-07-14 14:10:09.074593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.102 [2024-07-14 14:10:09.074606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.102 [2024-07-14 14:10:09.074640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.102 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.084482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.084570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.084595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.084609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.084622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.084649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.094535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.094670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.094696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.094711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.094724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.094753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.104563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.104696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.104721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.104736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.104749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.104777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.114608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.114705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.114731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.114745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.114758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.114786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.124617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.124706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.124740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.124755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.124767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.124795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.134639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.134724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.134749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.134763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.134776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.134804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.144656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.363 [2024-07-14 14:10:09.144746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.363 [2024-07-14 14:10:09.144772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.363 [2024-07-14 14:10:09.144786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.363 [2024-07-14 14:10:09.144799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.363 [2024-07-14 14:10:09.144826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.363 qpair failed and we were unable to recover it.
00:34:31.363 [2024-07-14 14:10:09.154699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.363 [2024-07-14 14:10:09.154794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.363 [2024-07-14 14:10:09.154819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.363 [2024-07-14 14:10:09.154833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.363 [2024-07-14 14:10:09.154846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.363 [2024-07-14 14:10:09.154874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.363 qpair failed and we were unable to recover it. 
00:34:31.363 [2024-07-14 14:10:09.164833] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.363 [2024-07-14 14:10:09.164968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.363 [2024-07-14 14:10:09.164993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.363 [2024-07-14 14:10:09.165007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.363 [2024-07-14 14:10:09.165026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.363 [2024-07-14 14:10:09.165055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.363 qpair failed and we were unable to recover it. 
00:34:31.363 [2024-07-14 14:10:09.174742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.363 [2024-07-14 14:10:09.174864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.363 [2024-07-14 14:10:09.174897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.363 [2024-07-14 14:10:09.174911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.363 [2024-07-14 14:10:09.174924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.363 [2024-07-14 14:10:09.174952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.363 qpair failed and we were unable to recover it. 
00:34:31.363 [2024-07-14 14:10:09.184762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.363 [2024-07-14 14:10:09.184850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.363 [2024-07-14 14:10:09.184882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.363 [2024-07-14 14:10:09.184899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.363 [2024-07-14 14:10:09.184913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.363 [2024-07-14 14:10:09.184942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.363 qpair failed and we were unable to recover it. 
00:34:31.363 [2024-07-14 14:10:09.194815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.363 [2024-07-14 14:10:09.194915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.363 [2024-07-14 14:10:09.194940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.363 [2024-07-14 14:10:09.194955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.363 [2024-07-14 14:10:09.194968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.363 [2024-07-14 14:10:09.194996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.363 qpair failed and we were unable to recover it. 
00:34:31.363 [2024-07-14 14:10:09.204839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.363 [2024-07-14 14:10:09.204945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.363 [2024-07-14 14:10:09.204971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.204984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.204997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.205025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.214907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.215009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.215034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.215049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.215062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.215090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.224890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.224979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.225005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.225019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.225032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.225060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.234933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.235028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.235053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.235068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.235081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.235108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.244983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.245081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.245106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.245120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.245133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.245161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.254967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.255050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.255075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.255089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.255124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.255154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.265019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.265114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.265139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.265153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.265166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.265194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.275046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.275141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.275166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.275180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.275193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.275222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.285058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.285149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.285174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.285188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.285201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.285228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.295083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.295175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.295201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.295215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.295228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.295256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.305124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.305220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.305246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.305260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.305273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.305300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.315262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.315356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.315381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.315396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.315408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.315436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.325176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.325264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.325289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.325304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.325317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.325344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.364 [2024-07-14 14:10:09.335224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.364 [2024-07-14 14:10:09.335316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.364 [2024-07-14 14:10:09.335341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.364 [2024-07-14 14:10:09.335355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.364 [2024-07-14 14:10:09.335368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.364 [2024-07-14 14:10:09.335397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.364 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.345227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.345318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.345344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.345358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.345377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.345405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.355354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.355448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.355474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.355488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.355501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.355528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.365268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.365357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.365382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.365396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.365409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.365436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.375302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.375387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.375412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.375426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.375439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.375468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.385368] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.385462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.385488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.385502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.385515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.385542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.395388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.395479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.395504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.395517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.395531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.395559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.405384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.405481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.405507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.405521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.405533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.405562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.415460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.415573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.415598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.625 [2024-07-14 14:10:09.415612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.625 [2024-07-14 14:10:09.415625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.625 [2024-07-14 14:10:09.415652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.625 qpair failed and we were unable to recover it. 
00:34:31.625 [2024-07-14 14:10:09.425445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.625 [2024-07-14 14:10:09.425531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.625 [2024-07-14 14:10:09.425556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.425570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.425584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.425611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.435492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.435588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.435612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.435632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.435646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.435676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.445524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.445611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.445636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.445650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.445662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.445689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.455549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.455640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.455665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.455679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.455692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.455720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.465600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.465691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.465717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.465731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.465744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.465772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.475699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.475792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.475817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.475831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.475843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.475871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.485648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.485744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.485768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.485782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.485794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.485821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.495749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.495897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.495923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.495936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.495949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.495977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.505678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.505765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.505791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.505805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.505819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.505848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.515729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.515831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.515856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.515870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.515891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.515921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.525734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.525828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.525853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.525874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.525896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.525925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.535767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.535895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.535924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.535939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.535952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.535981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.545789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.545889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.545916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.545930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.545943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.545971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.555823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.555924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.555950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.555964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.555977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.626 [2024-07-14 14:10:09.556005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.626 qpair failed and we were unable to recover it. 
00:34:31.626 [2024-07-14 14:10:09.565859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.626 [2024-07-14 14:10:09.565965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.626 [2024-07-14 14:10:09.565991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.626 [2024-07-14 14:10:09.566005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.626 [2024-07-14 14:10:09.566018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.627 [2024-07-14 14:10:09.566046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.627 qpair failed and we were unable to recover it. 
00:34:31.627 [2024-07-14 14:10:09.575866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.627 [2024-07-14 14:10:09.575961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.627 [2024-07-14 14:10:09.575987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.627 [2024-07-14 14:10:09.576001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.627 [2024-07-14 14:10:09.576014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.627 [2024-07-14 14:10:09.576042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.627 qpair failed and we were unable to recover it. 
00:34:31.627 [2024-07-14 14:10:09.585901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.627 [2024-07-14 14:10:09.585985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.627 [2024-07-14 14:10:09.586011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.627 [2024-07-14 14:10:09.586025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.627 [2024-07-14 14:10:09.586039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.627 [2024-07-14 14:10:09.586067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.627 qpair failed and we were unable to recover it. 
00:34:31.627 [2024-07-14 14:10:09.595942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.627 [2024-07-14 14:10:09.596037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.627 [2024-07-14 14:10:09.596061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.627 [2024-07-14 14:10:09.596075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.627 [2024-07-14 14:10:09.596089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.627 [2024-07-14 14:10:09.596116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.627 qpair failed and we were unable to recover it. 
00:34:31.627 [2024-07-14 14:10:09.606013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.627 [2024-07-14 14:10:09.606127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.627 [2024-07-14 14:10:09.606152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.627 [2024-07-14 14:10:09.606166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.627 [2024-07-14 14:10:09.606179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.627 [2024-07-14 14:10:09.606209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.627 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.615973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.616060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.616091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.616106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.616119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.616147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.626036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.626130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.626155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.626169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.626182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.626209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.636184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.636318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.636343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.636357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.636370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.636397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.646072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.646209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.646234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.646248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.646261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.646288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.656137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.656225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.656250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.656264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.656277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.656304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.666163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.666294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.666320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.666334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.666346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.666374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.676259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.676360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.676385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.676398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.676411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.676439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.686186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.686302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.686327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.686341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.686354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.686382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.696198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.696283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.696308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.696323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.696336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.888 [2024-07-14 14:10:09.696363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.888 qpair failed and we were unable to recover it. 
00:34:31.888 [2024-07-14 14:10:09.706241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.888 [2024-07-14 14:10:09.706324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.888 [2024-07-14 14:10:09.706355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.888 [2024-07-14 14:10:09.706370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.888 [2024-07-14 14:10:09.706383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.889 [2024-07-14 14:10:09.706411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.889 qpair failed and we were unable to recover it. 
00:34:31.889 [2024-07-14 14:10:09.716304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.889 [2024-07-14 14:10:09.716422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.889 [2024-07-14 14:10:09.716447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.889 [2024-07-14 14:10:09.716462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.889 [2024-07-14 14:10:09.716475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.889 [2024-07-14 14:10:09.716504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.889 qpair failed and we were unable to recover it. 
00:34:31.889 [2024-07-14 14:10:09.726303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.889 [2024-07-14 14:10:09.726393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.889 [2024-07-14 14:10:09.726419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.889 [2024-07-14 14:10:09.726433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.889 [2024-07-14 14:10:09.726446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.889 [2024-07-14 14:10:09.726474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.889 qpair failed and we were unable to recover it. 
00:34:31.889 [2024-07-14 14:10:09.736324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.889 [2024-07-14 14:10:09.736416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.889 [2024-07-14 14:10:09.736442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.889 [2024-07-14 14:10:09.736456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.889 [2024-07-14 14:10:09.736469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.889 [2024-07-14 14:10:09.736497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.889 qpair failed and we were unable to recover it. 
00:34:31.889 [2024-07-14 14:10:09.746486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.889 [2024-07-14 14:10:09.746625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.889 [2024-07-14 14:10:09.746650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.889 [2024-07-14 14:10:09.746664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.889 [2024-07-14 14:10:09.746677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:31.889 [2024-07-14 14:10:09.746711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.889 qpair failed and we were unable to recover it. 
00:34:31.889 [2024-07-14 14:10:09.756386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.756484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.756509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.756522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.756535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.756563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.766440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.766535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.766560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.766574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.766587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.766616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.776457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.776569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.776595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.776609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.776622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.776650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.786468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.786558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.786584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.786599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.786612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.786640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.796564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.796660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.796693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.796712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.796726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.796755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.806523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.806615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.806641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.806656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.806669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.806697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.816607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.816694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.816720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.816734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.816747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.816775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.826593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.826705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.826731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.826745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.826758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.826786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.836716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.836806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.836832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.836846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.836859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.836899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.846655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.889 [2024-07-14 14:10:09.846746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.889 [2024-07-14 14:10:09.846772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.889 [2024-07-14 14:10:09.846786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.889 [2024-07-14 14:10:09.846799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.889 [2024-07-14 14:10:09.846826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.889 qpair failed and we were unable to recover it.
00:34:31.889 [2024-07-14 14:10:09.856704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.890 [2024-07-14 14:10:09.856795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.890 [2024-07-14 14:10:09.856821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.890 [2024-07-14 14:10:09.856836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.890 [2024-07-14 14:10:09.856849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.890 [2024-07-14 14:10:09.856886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.890 qpair failed and we were unable to recover it.
00:34:31.890 [2024-07-14 14:10:09.866715] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.890 [2024-07-14 14:10:09.866802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.890 [2024-07-14 14:10:09.866828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.890 [2024-07-14 14:10:09.866842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.890 [2024-07-14 14:10:09.866855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:31.890 [2024-07-14 14:10:09.866892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.890 qpair failed and we were unable to recover it.
00:34:32.151 [2024-07-14 14:10:09.876748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.151 [2024-07-14 14:10:09.876841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.151 [2024-07-14 14:10:09.876867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.151 [2024-07-14 14:10:09.876889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.151 [2024-07-14 14:10:09.876904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.151 [2024-07-14 14:10:09.876932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.151 qpair failed and we were unable to recover it.
00:34:32.151 [2024-07-14 14:10:09.886848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.886944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.886978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.886993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.887006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.887035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.896883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.896986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.897012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.897026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.897039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.897067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.906909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.907006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.907032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.907046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.907060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.907088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.916856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.916953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.916979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.916993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.917006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.917034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.926935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.927038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.927063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.927077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.927095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.927123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.936914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.937001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.937027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.937042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.937055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.937083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.946925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.947050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.947076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.947090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.947104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.947132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.956961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.957055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.957081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.957095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.957108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.957136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.966966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.967056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.967082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.967096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.967109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.967137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.977017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.977104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.977130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.977144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.977157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.977185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.987034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.987121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.987146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.987160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.987173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.987201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:09.997058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:09.997158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:09.997184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:09.997198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:09.997211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:09.997239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:10.007163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:10.007269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:10.007298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:10.007313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:10.007327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:10.007357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:10.017137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:10.017232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:10.017265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.152 [2024-07-14 14:10:10.017280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.152 [2024-07-14 14:10:10.017299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.152 [2024-07-14 14:10:10.017328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.152 qpair failed and we were unable to recover it.
00:34:32.152 [2024-07-14 14:10:10.027176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.152 [2024-07-14 14:10:10.027266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.152 [2024-07-14 14:10:10.027292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.027306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.027319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.027347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.037307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.037411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.037443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.037457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.037471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.037498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.047227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.047317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.047342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.047357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.047370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.047399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.057236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.057368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.057397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.057414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.057427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.057456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.067325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.067445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.067474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.067489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.067503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.067533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.077307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.077437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.077462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.077476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.077490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.077518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.087343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.087463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.087488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.087503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.087516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.087546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.097351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.097487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.097513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.097528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.097541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.097568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.107399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.153 [2024-07-14 14:10:10.107491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.153 [2024-07-14 14:10:10.107517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.153 [2024-07-14 14:10:10.107531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.153 [2024-07-14 14:10:10.107550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840
00:34:32.153 [2024-07-14 14:10:10.107579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.153 qpair failed and we were unable to recover it.
00:34:32.153 [2024-07-14 14:10:10.117413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.153 [2024-07-14 14:10:10.117506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.153 [2024-07-14 14:10:10.117531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.153 [2024-07-14 14:10:10.117545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.153 [2024-07-14 14:10:10.117559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.153 [2024-07-14 14:10:10.117587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.153 qpair failed and we were unable to recover it. 
00:34:32.153 [2024-07-14 14:10:10.127477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.153 [2024-07-14 14:10:10.127572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.153 [2024-07-14 14:10:10.127597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.153 [2024-07-14 14:10:10.127611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.153 [2024-07-14 14:10:10.127624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.153 [2024-07-14 14:10:10.127652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.153 qpair failed and we were unable to recover it. 
00:34:32.413 [2024-07-14 14:10:10.137453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.413 [2024-07-14 14:10:10.137542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.413 [2024-07-14 14:10:10.137567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.413 [2024-07-14 14:10:10.137581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.413 [2024-07-14 14:10:10.137595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.413 [2024-07-14 14:10:10.137622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.413 qpair failed and we were unable to recover it. 
00:34:32.413 [2024-07-14 14:10:10.147490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.413 [2024-07-14 14:10:10.147582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.413 [2024-07-14 14:10:10.147606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.413 [2024-07-14 14:10:10.147620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.413 [2024-07-14 14:10:10.147634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.413 [2024-07-14 14:10:10.147661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.413 qpair failed and we were unable to recover it. 
00:34:32.413 [2024-07-14 14:10:10.157534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.413 [2024-07-14 14:10:10.157663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.413 [2024-07-14 14:10:10.157688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.413 [2024-07-14 14:10:10.157703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.413 [2024-07-14 14:10:10.157716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.413 [2024-07-14 14:10:10.157743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.413 qpair failed and we were unable to recover it. 
00:34:32.413 [2024-07-14 14:10:10.167531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.413 [2024-07-14 14:10:10.167627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.413 [2024-07-14 14:10:10.167652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.413 [2024-07-14 14:10:10.167666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.413 [2024-07-14 14:10:10.167680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.413 [2024-07-14 14:10:10.167707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.413 qpair failed and we were unable to recover it. 
00:34:32.413 [2024-07-14 14:10:10.177572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.413 [2024-07-14 14:10:10.177672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.413 [2024-07-14 14:10:10.177697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.413 [2024-07-14 14:10:10.177711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.413 [2024-07-14 14:10:10.177724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.413 [2024-07-14 14:10:10.177752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.413 qpair failed and we were unable to recover it. 
00:34:32.413 [2024-07-14 14:10:10.187642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.413 [2024-07-14 14:10:10.187760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.413 [2024-07-14 14:10:10.187785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.413 [2024-07-14 14:10:10.187799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.187811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.187839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.197662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.197774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.197799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.197819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.197833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.197861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.207754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.207899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.207925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.207939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.207953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.207981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.217727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.217842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.217867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.217892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.217907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.217935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.227781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.227873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.227907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.227922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.227935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.227963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.237868] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.237999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.238025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.238039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.238052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.238080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.247829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.247942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.247968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.247982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.247995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.248023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.257832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.257956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.257982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.257996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.258008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.258037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.267916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.268006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.268031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.268046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.268059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.268087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.277996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.278094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.278119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.278132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.278145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.278174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.287895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.287984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.288009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.288028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.288042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.288073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.297935] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.298028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.298053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.298067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.298080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.298110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.307965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.308087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.308112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.308126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.308139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.308166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.318034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.414 [2024-07-14 14:10:10.318176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.414 [2024-07-14 14:10:10.318202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.414 [2024-07-14 14:10:10.318216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.414 [2024-07-14 14:10:10.318228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.414 [2024-07-14 14:10:10.318256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.414 qpair failed and we were unable to recover it. 
00:34:32.414 [2024-07-14 14:10:10.328041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.328133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.328158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.328172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.328185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.328213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.415 [2024-07-14 14:10:10.338116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.338245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.338270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.338284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.338297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.338325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.415 [2024-07-14 14:10:10.348111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.348203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.348228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.348242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.348255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.348283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.415 [2024-07-14 14:10:10.358119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.358213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.358238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.358252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.358265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.358292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.415 [2024-07-14 14:10:10.368111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.368232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.368257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.368271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.368285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.368312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.415 [2024-07-14 14:10:10.378217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.378323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.378348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.378368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.378381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.378410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.415 [2024-07-14 14:10:10.388142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.415 [2024-07-14 14:10:10.388227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.415 [2024-07-14 14:10:10.388252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.415 [2024-07-14 14:10:10.388265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.415 [2024-07-14 14:10:10.388278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.415 [2024-07-14 14:10:10.388307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.415 qpair failed and we were unable to recover it. 
00:34:32.674 [2024-07-14 14:10:10.398197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.674 [2024-07-14 14:10:10.398290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.674 [2024-07-14 14:10:10.398315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.674 [2024-07-14 14:10:10.398329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.674 [2024-07-14 14:10:10.398342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.674 [2024-07-14 14:10:10.398370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.674 qpair failed and we were unable to recover it. 
00:34:32.674 [2024-07-14 14:10:10.408203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.674 [2024-07-14 14:10:10.408290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.674 [2024-07-14 14:10:10.408314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.674 [2024-07-14 14:10:10.408328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.674 [2024-07-14 14:10:10.408341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.674 [2024-07-14 14:10:10.408370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.674 qpair failed and we were unable to recover it. 
00:34:32.674 [2024-07-14 14:10:10.418242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.674 [2024-07-14 14:10:10.418324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.674 [2024-07-14 14:10:10.418349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.674 [2024-07-14 14:10:10.418362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.674 [2024-07-14 14:10:10.418376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.674 [2024-07-14 14:10:10.418403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.674 qpair failed and we were unable to recover it. 
00:34:32.674 [2024-07-14 14:10:10.428268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.674 [2024-07-14 14:10:10.428371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.674 [2024-07-14 14:10:10.428396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.674 [2024-07-14 14:10:10.428410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.674 [2024-07-14 14:10:10.428423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.674 [2024-07-14 14:10:10.428450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.674 qpair failed and we were unable to recover it. 
00:34:32.674 [2024-07-14 14:10:10.438307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.674 [2024-07-14 14:10:10.438399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.674 [2024-07-14 14:10:10.438424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.674 [2024-07-14 14:10:10.438438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.674 [2024-07-14 14:10:10.438452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.674 [2024-07-14 14:10:10.438479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.674 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.448404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.448494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.448519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.448533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.448545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.448573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.458357] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.458442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.458467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.458482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.458495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.458522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.468389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.468473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.468504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.468519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.468532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.468560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.478432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.478519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.478544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.478558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.478571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.478598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.488465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.488599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.488623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.488636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.488648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.488675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.498477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.498569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.498595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.498609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.498622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.498649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.508495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.508586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.508611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.508625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.508639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.508672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.518542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.518637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.518662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.518676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.518689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.518717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.528562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.528660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.528686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.528700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.528713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.528740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.538604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.538695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.538720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.538734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.538748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.538775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.548627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.548759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.548784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.548798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.548811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.548839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.558657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.558749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.558779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.558794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.558807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.558835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.568668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.568755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.568780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.568794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.568806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.568834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.578738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.578856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.675 [2024-07-14 14:10:10.578896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.675 [2024-07-14 14:10:10.578914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.675 [2024-07-14 14:10:10.578927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.675 [2024-07-14 14:10:10.578958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.675 qpair failed and we were unable to recover it. 
00:34:32.675 [2024-07-14 14:10:10.588754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.675 [2024-07-14 14:10:10.588839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.588865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.588888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.588903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.588931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.676 [2024-07-14 14:10:10.598760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.676 [2024-07-14 14:10:10.598854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.598887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.598904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.598918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.598951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.676 [2024-07-14 14:10:10.608815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.676 [2024-07-14 14:10:10.608914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.608949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.608964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.608977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.609006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.676 [2024-07-14 14:10:10.618818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.676 [2024-07-14 14:10:10.618915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.618940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.618954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.618967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.618995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.676 [2024-07-14 14:10:10.628845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.676 [2024-07-14 14:10:10.628958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.628983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.628998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.629011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.629038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.676 [2024-07-14 14:10:10.638893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.676 [2024-07-14 14:10:10.639021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.639046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.639060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.639074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.639101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.676 [2024-07-14 14:10:10.648907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.676 [2024-07-14 14:10:10.649035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.676 [2024-07-14 14:10:10.649069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.676 [2024-07-14 14:10:10.649084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.676 [2024-07-14 14:10:10.649097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.676 [2024-07-14 14:10:10.649125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.676 qpair failed and we were unable to recover it. 
00:34:32.934 [2024-07-14 14:10:10.658977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.934 [2024-07-14 14:10:10.659078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.934 [2024-07-14 14:10:10.659103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.934 [2024-07-14 14:10:10.659118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.934 [2024-07-14 14:10:10.659131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.934 [2024-07-14 14:10:10.659158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.934 qpair failed and we were unable to recover it. 
00:34:32.934 [2024-07-14 14:10:10.668994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.934 [2024-07-14 14:10:10.669082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.934 [2024-07-14 14:10:10.669108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.934 [2024-07-14 14:10:10.669122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.934 [2024-07-14 14:10:10.669135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.934 [2024-07-14 14:10:10.669163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 
00:34:32.935 [2024-07-14 14:10:10.679012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.935 [2024-07-14 14:10:10.679110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.935 [2024-07-14 14:10:10.679135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.935 [2024-07-14 14:10:10.679149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.935 [2024-07-14 14:10:10.679162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.935 [2024-07-14 14:10:10.679190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 
00:34:32.935 [2024-07-14 14:10:10.689040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.935 [2024-07-14 14:10:10.689173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.935 [2024-07-14 14:10:10.689197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.935 [2024-07-14 14:10:10.689212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.935 [2024-07-14 14:10:10.689225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.935 [2024-07-14 14:10:10.689258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 
00:34:32.935 [2024-07-14 14:10:10.699053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.935 [2024-07-14 14:10:10.699142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.935 [2024-07-14 14:10:10.699168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.935 [2024-07-14 14:10:10.699182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.935 [2024-07-14 14:10:10.699195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.935 [2024-07-14 14:10:10.699222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 
00:34:32.935 [2024-07-14 14:10:10.709086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.935 [2024-07-14 14:10:10.709185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.935 [2024-07-14 14:10:10.709211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.935 [2024-07-14 14:10:10.709224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.935 [2024-07-14 14:10:10.709237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.935 [2024-07-14 14:10:10.709264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 
00:34:32.935 [2024-07-14 14:10:10.719112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.935 [2024-07-14 14:10:10.719201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.935 [2024-07-14 14:10:10.719227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.935 [2024-07-14 14:10:10.719241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.935 [2024-07-14 14:10:10.719253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.935 [2024-07-14 14:10:10.719280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 
00:34:32.935 [2024-07-14 14:10:10.729243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.935 [2024-07-14 14:10:10.729340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.935 [2024-07-14 14:10:10.729365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.935 [2024-07-14 14:10:10.729378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.935 [2024-07-14 14:10:10.729391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ce0840 00:34:32.935 [2024-07-14 14:10:10.729419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.935 qpair failed and we were unable to recover it. 00:34:32.935 [2024-07-14 14:10:10.729567] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:32.935 A controller has encountered a failure and is being reset. 00:34:32.935 Controller properly reset. 00:34:32.935 Initializing NVMe Controllers 00:34:32.935 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:32.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:32.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:32.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:32.935 Initialization complete. Launching workers. 
00:34:32.935 Starting thread on core 1 00:34:32.935 Starting thread on core 2 00:34:32.935 Starting thread on core 3 00:34:32.935 Starting thread on core 0 00:34:32.935 14:10:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:32.935 00:34:32.935 real 0m10.902s 00:34:32.935 user 0m18.753s 00:34:32.935 sys 0m5.326s 00:34:32.935 14:10:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:32.935 14:10:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.935 ************************************ 00:34:32.935 END TEST nvmf_target_disconnect_tc2 00:34:32.935 ************************************ 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:33.193 rmmod nvme_tcp 00:34:33.193 rmmod nvme_fabrics 00:34:33.193 rmmod nvme_keyring 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:33.193 14:10:10 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1613402 ']' 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1613402 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1613402 ']' 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1613402 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1613402 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1613402' 00:34:33.193 killing process with pid 1613402 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1613402 00:34:33.193 14:10:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1613402 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:33.453 14:10:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:33.453 14:10:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.382 14:10:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:35.382 00:34:35.382 real 0m15.556s 00:34:35.382 user 0m45.397s 00:34:35.382 sys 0m7.160s 00:34:35.382 14:10:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:35.382 14:10:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 ************************************ 00:34:35.382 END TEST nvmf_target_disconnect 00:34:35.382 ************************************ 00:34:35.382 14:10:13 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:35.382 14:10:13 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.382 14:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 14:10:13 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:35.382 00:34:35.382 real 27m1.057s 00:34:35.382 user 74m31.243s 00:34:35.382 sys 6m11.578s 00:34:35.382 14:10:13 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:35.382 14:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.382 ************************************ 00:34:35.382 END TEST nvmf_tcp 00:34:35.382 ************************************ 00:34:35.382 14:10:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:35.382 14:10:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:35.382 14:10:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:35.382 14:10:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:35.382 14:10:13 -- 
common/autotest_common.sh@10 -- # set +x 00:34:35.382 ************************************ 00:34:35.382 START TEST spdkcli_nvmf_tcp 00:34:35.382 ************************************ 00:34:35.382 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:35.641 * Looking for test storage... 00:34:35.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.641 14:10:13 
spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1614590 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1614590 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1614590 ']' 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:35.641 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.641 [2024-07-14 14:10:13.466345] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:35.641 [2024-07-14 14:10:13.466429] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614590 ] 00:34:35.641 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.641 [2024-07-14 14:10:13.524042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.641 [2024-07-14 14:10:13.608970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.641 [2024-07-14 14:10:13.608974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.900 14:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 
''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:35.900 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:35.900 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:35.900 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:35.900 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:35.900 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:35.900 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:35.900 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:35.900 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:35.900 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:35.900 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:35.900 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:35.900 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:35.900 ' 00:34:38.457 [2024-07-14 14:10:16.281124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.838 [2024-07-14 14:10:17.513462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:42.372 [2024-07-14 14:10:19.796495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:43.799 [2024-07-14 14:10:21.746713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:45.697 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:45.697 
Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:45.697 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:45.697 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:45.697 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:45.697 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:45.697 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:45.697 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:45.697 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:45.697 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', 
'127.0.0.1:4260', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:45.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:45.697 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:45.697 14:10:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.954 14:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:45.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:45.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:45.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:45.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' 
'\''127.0.0.1:4262'\'' 00:34:45.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:45.954 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:45.954 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:45.954 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:45.954 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:45.954 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:45.954 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:45.954 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:45.954 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:45.954 ' 00:34:51.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:51.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:51.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:51.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:51.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:51.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:51.217 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:51.217 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:51.217 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:51.217 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:51.217 Executing command: ['/bdevs/malloc 
delete Malloc4', 'Malloc4', False] 00:34:51.217 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:51.217 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:51.217 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1614590 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1614590 ']' 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1614590 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1614590 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1614590' 00:34:51.217 killing process with pid 1614590 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1614590 00:34:51.217 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1614590 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1614590 ']' 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 
1614590 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1614590 ']' 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1614590 00:34:51.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1614590) - No such process 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1614590 is not found' 00:34:51.476 Process with pid 1614590 is not found 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:51.476 00:34:51.476 real 0m15.960s 00:34:51.476 user 0m33.723s 00:34:51.476 sys 0m0.813s 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:51.476 14:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.476 ************************************ 00:34:51.476 END TEST spdkcli_nvmf_tcp 00:34:51.476 ************************************ 00:34:51.476 14:10:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:51.476 14:10:29 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:51.476 14:10:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:51.476 14:10:29 -- common/autotest_common.sh@10 -- # set +x 00:34:51.476 ************************************ 00:34:51.476 START TEST nvmf_identify_passthru 00:34:51.476 ************************************ 00:34:51.476 14:10:29 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:51.476 * Looking for test storage... 00:34:51.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.476 14:10:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.476 14:10:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.476 14:10:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.476 14:10:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.476 14:10:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.476 14:10:29 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.476 14:10:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.476 14:10:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:34:51.476 14:10:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.476 14:10:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.476 14:10:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.476 14:10:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:51.476 14:10:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:51.476 14:10:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:53.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.378 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:53.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:53.379 14:10:31 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:53.379 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.379 14:10:31 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:53.379 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.379 14:10:31 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.379 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:53.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:34:53.636 00:34:53.636 --- 10.0.0.2 ping statistics --- 00:34:53.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.636 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
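The namespace plumbing traced above (`nvmf_tcp_init` in `nvmf/common.sh`) follows a common SPDK test pattern: one port of a two-port NIC is moved into a private network namespace so the target and initiator can exchange real NVMe/TCP traffic on a single host. A minimal sketch of that sequence, assuming the `cvl_0_0`/`cvl_0_1` interface names seen in this run (they vary per NIC and driver) and root privileges:

```shell
#!/usr/bin/env bash
# Sketch of the netns setup performed by nvmf_tcp_init; interface names
# are taken from this run and must be adjusted for other machines.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target side, moved into the namespace
INI_IF=cvl_0_1      # initiator side, stays in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the default discovery/IO port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the trace does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target process is then launched under `ip netns exec "$NS"`, which is why the trace prepends `NVMF_TARGET_NS_CMD` to `NVMF_APP`.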
00:34:53.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:34:53.636 00:34:53.636 --- 10.0.0.1 ping statistics --- 00:34:53.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.636 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:53.636 14:10:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:53.636 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:53.636 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:53.636 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:53.637 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:53.637 14:10:31 nvmf_identify_passthru -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:53.637 14:10:31 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:53.637 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:53.637 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:53.637 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:53.637 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:53.637 14:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:53.637 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.815 14:10:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:57.815 14:10:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:57.815 14:10:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:57.815 14:10:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:57.815 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1619092 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:01.999 14:10:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1619092 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1619092 ']' 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:01.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.999 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.258 14:10:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.258 [2024-07-14 14:10:40.029211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:02.258 [2024-07-14 14:10:40.029301] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.258 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.258 [2024-07-14 14:10:40.101971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.258 [2024-07-14 14:10:40.193728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.258 [2024-07-14 14:10:40.193784] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.258 [2024-07-14 14:10:40.193800] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.258 [2024-07-14 14:10:40.193814] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.258 [2024-07-14 14:10:40.193825] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
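The target above was launched with `--wait-for-rpc`, which holds SPDK in a pre-init state so that configuration RPCs can land before subsystem initialization completes. A sketch of the RPC ordering this test relies on, using SPDK's `scripts/rpc.py` against a running target (the path is an example relative to an SPDK checkout); the passthru flag must be issued while the target is still paused:

```shell
# Sketch of the configuration-RPC ordering used by identify_passthru.sh,
# assuming a target started as: nvmf_tgt ... --wait-for-rpc
RPC="scripts/rpc.py"

# Enable Identify-Controller passthru for NVMe-backed controllers; this
# must be set before initialization finishes.
$RPC nvmf_set_config --passthru-identify-ctrlr

# Resume startup; the target leaves the pre-init state here.
$RPC framework_start_init

# Create the TCP transport with the same flags as this run.
$RPC nvmf_create_transport -t tcp -o -u 8192
```

After this point the test attaches the local PCIe controller as `Nvme0`, exposes it through `nqn.2016-06.io.spdk:cnode1`, and adds a TCP listener, matching the RPCs visible later in the trace.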
00:35:02.258 [2024-07-14 14:10:40.193911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.258 [2024-07-14 14:10:40.193950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.258 [2024-07-14 14:10:40.193972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.258 [2024-07-14 14:10:40.193975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:35:02.258 14:10:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.258 INFO: Log level set to 20 00:35:02.258 INFO: Requests: 00:35:02.258 { 00:35:02.258 "jsonrpc": "2.0", 00:35:02.258 "method": "nvmf_set_config", 00:35:02.258 "id": 1, 00:35:02.258 "params": { 00:35:02.258 "admin_cmd_passthru": { 00:35:02.258 "identify_ctrlr": true 00:35:02.258 } 00:35:02.258 } 00:35:02.258 } 00:35:02.258 00:35:02.258 INFO: response: 00:35:02.258 { 00:35:02.258 "jsonrpc": "2.0", 00:35:02.258 "id": 1, 00:35:02.258 "result": true 00:35:02.258 } 00:35:02.258 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.258 14:10:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.258 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.515 INFO: Setting log level to 20 00:35:02.515 INFO: Setting log level to 20 00:35:02.515 INFO: Log level set to 20 00:35:02.515 INFO: Log level set to 20 00:35:02.515 
INFO: Requests: 00:35:02.515 { 00:35:02.515 "jsonrpc": "2.0", 00:35:02.515 "method": "framework_start_init", 00:35:02.515 "id": 1 00:35:02.515 } 00:35:02.515 00:35:02.515 INFO: Requests: 00:35:02.515 { 00:35:02.515 "jsonrpc": "2.0", 00:35:02.515 "method": "framework_start_init", 00:35:02.515 "id": 1 00:35:02.515 } 00:35:02.515 00:35:02.515 [2024-07-14 14:10:40.342288] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:02.515 INFO: response: 00:35:02.515 { 00:35:02.515 "jsonrpc": "2.0", 00:35:02.515 "id": 1, 00:35:02.515 "result": true 00:35:02.515 } 00:35:02.515 00:35:02.515 INFO: response: 00:35:02.515 { 00:35:02.515 "jsonrpc": "2.0", 00:35:02.515 "id": 1, 00:35:02.515 "result": true 00:35:02.515 } 00:35:02.515 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.515 14:10:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.515 INFO: Setting log level to 40 00:35:02.515 INFO: Setting log level to 40 00:35:02.515 INFO: Setting log level to 40 00:35:02.515 [2024-07-14 14:10:40.352394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.515 14:10:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.515 14:10:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:02.515 14:10:40 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.515 14:10:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.787 Nvme0n1 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.787 [2024-07-14 14:10:43.244392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.787 14:10:43 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.787 [ 00:35:05.787 { 00:35:05.787 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:05.787 "subtype": "Discovery", 00:35:05.787 "listen_addresses": [], 00:35:05.787 "allow_any_host": true, 00:35:05.787 "hosts": [] 00:35:05.787 }, 00:35:05.787 { 00:35:05.787 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.787 "subtype": "NVMe", 00:35:05.787 "listen_addresses": [ 00:35:05.787 { 00:35:05.787 "trtype": "TCP", 00:35:05.787 "adrfam": "IPv4", 00:35:05.787 "traddr": "10.0.0.2", 00:35:05.787 "trsvcid": "4420" 00:35:05.787 } 00:35:05.787 ], 00:35:05.787 "allow_any_host": true, 00:35:05.787 "hosts": [], 00:35:05.787 "serial_number": "SPDK00000000000001", 00:35:05.787 "model_number": "SPDK bdev Controller", 00:35:05.787 "max_namespaces": 1, 00:35:05.787 "min_cntlid": 1, 00:35:05.787 "max_cntlid": 65519, 00:35:05.787 "namespaces": [ 00:35:05.787 { 00:35:05.787 "nsid": 1, 00:35:05.787 "bdev_name": "Nvme0n1", 00:35:05.787 "name": "Nvme0n1", 00:35:05.787 "nguid": "4903E5093CD844518773393F791A7E82", 00:35:05.787 "uuid": "4903e509-3cd8-4451-8773-393f791a7e82" 00:35:05.787 } 00:35:05.787 ] 00:35:05.787 } 00:35:05.787 ] 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:05.787 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:05.787 14:10:43 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:05.787 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:05.787 14:10:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:05.787 rmmod 
nvme_tcp 00:35:05.787 rmmod nvme_fabrics 00:35:05.787 rmmod nvme_keyring 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1619092 ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1619092 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1619092 ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1619092 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1619092 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1619092' 00:35:05.787 killing process with pid 1619092 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1619092 00:35:05.787 14:10:43 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1619092 00:35:07.686 14:10:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:07.686 14:10:45 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:07.686 14:10:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:07.686 14:10:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
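The Serial Number / Model Number checks exercised by identify_passthru.sh above reduce to a small `grep | awk` pipeline over `spdk_nvme_identify` output. A minimal re-creation follows; the identify text below is a stand-in (the log only confirms the serial `PHLJ916004901P0FGN` and the first model token `INTEL` — the full model string here is hypothetical):

```shell
# Re-creation of the extraction used at identify_passthru.sh@54 / @61.
# The identify output is a stand-in; real output comes from spdk_nvme_identify.
identify_output='Serial Number:                       PHLJ916004901P0FGN
Model Number:                        INTEL SSDPE2KX010T8'

# Same pipeline as the log: grep the label line, keep the third field.
nvmf_serial_number=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
nvmf_model_number=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')

echo "$nvmf_serial_number $nvmf_model_number"
```

Because `awk '{print $3}'` keeps only the third whitespace-separated field, a multi-word model string is truncated to its first token — which is why the log's comparison at @68 tests the bare value `INTEL`.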
00:35:07.686 14:10:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:07.686 14:10:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.686 14:10:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.686 14:10:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.588 14:10:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:09.588 00:35:09.588 real 0m17.894s 00:35:09.588 user 0m26.432s 00:35:09.588 sys 0m2.288s 00:35:09.588 14:10:47 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:09.588 14:10:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.588 ************************************ 00:35:09.588 END TEST nvmf_identify_passthru 00:35:09.588 ************************************ 00:35:09.588 14:10:47 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:09.588 14:10:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:09.588 14:10:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:09.588 14:10:47 -- common/autotest_common.sh@10 -- # set +x 00:35:09.588 ************************************ 00:35:09.588 START TEST nvmf_dif 00:35:09.588 ************************************ 00:35:09.588 14:10:47 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:09.588 * Looking for test storage... 
00:35:09.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.588 14:10:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.588 14:10:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.588 14:10:47 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.588 14:10:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.588 14:10:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.588 14:10:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.588 14:10:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.588 14:10:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:09.588 14:10:47 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.588 14:10:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:09.589 14:10:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:09.589 14:10:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:09.589 14:10:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:09.589 14:10:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:09.589 14:10:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.589 14:10:47 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:09.589 14:10:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:09.589 14:10:47 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:09.589 14:10:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:11.490 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:11.490 14:10:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:35:11.491 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:11.491 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:11.491 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.491 14:10:49 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:11.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:35:11.491 00:35:11.491 --- 10.0.0.2 ping statistics --- 00:35:11.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.491 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
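The namespace plumbing performed by `nvmf_tcp_init` above (nvmf/common.sh @248–@264 in this trace) can be sketched as a dry run. `run` only echoes, since the real commands require root plus the `cvl_0_0`/`cvl_0_1` ice interfaces present on this CI host:

```shell
# Dry-run sketch of the target network-namespace setup shown in the trace.
# run() echoes instead of executing: the real commands need root and the
# cvl_0_0 / cvl_0_1 interfaces, so nothing here touches the host.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                            # @248
run ip link set cvl_0_0 netns "$NS"                               # @251: target NIC into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # @254: initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # @255: target side
run ip link set cvl_0_1 up                                        # @258
run ip netns exec "$NS" ip link set cvl_0_0 up                    # @260
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # @264: allow NVMe/TCP
```

The two pings that follow in the trace then verify reachability in both directions before `NVMF_APP` is wrapped in `ip netns exec cvl_0_0_ns_spdk` at @270.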
00:35:11.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:35:11.491 00:35:11.491 --- 10.0.0.1 ping statistics --- 00:35:11.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.491 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:11.491 14:10:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:12.426 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:12.426 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:12.426 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:12.426 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:12.426 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:12.426 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:12.426 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:12.426 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:12.685 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:12.685 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:12.685 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:12.685 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:12.685 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:12.685 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:12.685 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:12.685 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:12.685 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:12.685 14:10:50 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:12.685 14:10:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:12.685 14:10:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1622224 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:12.685 14:10:50 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1622224 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1622224 ']' 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:12.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:12.685 14:10:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:12.685 [2024-07-14 14:10:50.661172] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:35:12.685 [2024-07-14 14:10:50.661274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.943 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.943 [2024-07-14 14:10:50.726321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.943 [2024-07-14 14:10:50.811007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:12.943 [2024-07-14 14:10:50.811059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:12.943 [2024-07-14 14:10:50.811087] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.943 [2024-07-14 14:10:50.811098] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.943 [2024-07-14 14:10:50.811108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:12.943 [2024-07-14 14:10:50.811133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.943 14:10:50 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:12.943 14:10:50 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:12.943 14:10:50 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:12.943 14:10:50 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:12.943 14:10:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 14:10:50 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.202 14:10:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:13.202 14:10:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:13.202 14:10:50 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.202 14:10:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 [2024-07-14 14:10:50.940319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.202 14:10:50 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.202 14:10:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:13.202 14:10:50 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:13.202 14:10:50 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:13.202 14:10:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 ************************************ 00:35:13.202 START TEST fio_dif_1_default 00:35:13.202 ************************************ 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 bdev_null0 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:13.202 [2024-07-14 14:10:50.996581] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.202 14:10:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:13.202 { 00:35:13.202 "params": { 00:35:13.202 "name": "Nvme$subsystem", 00:35:13.202 "trtype": "$TEST_TRANSPORT", 00:35:13.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.202 "adrfam": "ipv4", 00:35:13.202 "trsvcid": "$NVMF_PORT", 00:35:13.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.202 "hdgst": ${hdgst:-false}, 00:35:13.202 "ddgst": ${ddgst:-false} 00:35:13.202 }, 00:35:13.202 "method": "bdev_nvme_attach_controller" 00:35:13.202 } 00:35:13.202 EOF 00:35:13.202 )") 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.202 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:13.203 14:10:51 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:13.203 "params": { 00:35:13.203 "name": "Nvme0", 00:35:13.203 "trtype": "tcp", 00:35:13.203 "traddr": "10.0.0.2", 00:35:13.203 "adrfam": "ipv4", 00:35:13.203 "trsvcid": "4420", 00:35:13.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.203 "hdgst": false, 00:35:13.203 "ddgst": false 00:35:13.203 }, 00:35:13.203 "method": "bdev_nvme_attach_controller" 00:35:13.203 }' 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:13.203 14:10:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:13.461 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:13.461 fio-3.35 
00:35:13.461 Starting 1 thread 00:35:13.461 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.656 00:35:25.656 filename0: (groupid=0, jobs=1): err= 0: pid=1622450: Sun Jul 14 14:11:01 2024 00:35:25.656 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10034msec) 00:35:25.656 slat (nsec): min=4381, max=97864, avg=9510.04, stdev=3308.09 00:35:25.656 clat (usec): min=539, max=45864, avg=21005.81, stdev=20422.26 00:35:25.656 lat (usec): min=547, max=45890, avg=21015.32, stdev=20422.24 00:35:25.656 clat percentiles (usec): 00:35:25.656 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 627], 00:35:25.656 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[ 791], 60.00th=[41157], 00:35:25.656 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:25.656 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:35:25.656 | 99.99th=[45876] 00:35:25.656 bw ( KiB/s): min= 704, max= 832, per=100.00%, avg=761.60, stdev=28.62, samples=20 00:35:25.656 iops : min= 176, max= 208, avg=190.40, stdev= 7.16, samples=20 00:35:25.656 lat (usec) : 750=48.22%, 1000=1.89% 00:35:25.656 lat (msec) : 50=49.90% 00:35:25.656 cpu : usr=89.04%, sys=10.42%, ctx=18, majf=0, minf=249 00:35:25.656 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.656 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.656 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:25.656 00:35:25.656 Run status group 0 (all jobs): 00:35:25.656 READ: bw=761KiB/s (779kB/s), 761KiB/s-761KiB/s (779kB/s-779kB/s), io=7632KiB (7815kB), run=10034-10034msec 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:25.656 14:11:02 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.656 00:35:25.656 real 0m11.120s 00:35:25.656 user 0m10.001s 00:35:25.656 sys 0m1.352s 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 ************************************ 00:35:25.656 END TEST fio_dif_1_default 00:35:25.656 ************************************ 00:35:25.656 14:11:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:25.656 14:11:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:25.656 14:11:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 ************************************ 00:35:25.656 START TEST fio_dif_1_multi_subsystems 00:35:25.656 
************************************ 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 bdev_null0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 [2024-07-14 14:11:02.160726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:25.656 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.657 bdev_null1 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.657 
14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.657 14:11:02 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.657 { 00:35:25.657 "params": { 00:35:25.657 "name": "Nvme$subsystem", 00:35:25.657 "trtype": "$TEST_TRANSPORT", 00:35:25.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.657 "adrfam": "ipv4", 00:35:25.657 "trsvcid": "$NVMF_PORT", 00:35:25.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.657 "hdgst": ${hdgst:-false}, 00:35:25.657 "ddgst": ${ddgst:-false} 00:35:25.657 }, 00:35:25.657 "method": "bdev_nvme_attach_controller" 00:35:25.657 } 00:35:25.657 EOF 00:35:25.657 )") 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:25.657 14:11:02 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.657 { 00:35:25.657 "params": { 00:35:25.657 "name": "Nvme$subsystem", 00:35:25.657 "trtype": "$TEST_TRANSPORT", 00:35:25.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.657 "adrfam": "ipv4", 00:35:25.657 "trsvcid": "$NVMF_PORT", 00:35:25.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.657 "hdgst": ${hdgst:-false}, 00:35:25.657 "ddgst": ${ddgst:-false} 00:35:25.657 }, 00:35:25.657 "method": "bdev_nvme_attach_controller" 00:35:25.657 } 00:35:25.657 EOF 00:35:25.657 )") 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:25.657 "params": { 00:35:25.657 "name": "Nvme0", 00:35:25.657 "trtype": "tcp", 00:35:25.657 "traddr": "10.0.0.2", 00:35:25.657 "adrfam": "ipv4", 00:35:25.657 "trsvcid": "4420", 00:35:25.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.657 "hdgst": false, 00:35:25.657 "ddgst": false 00:35:25.657 }, 00:35:25.657 "method": "bdev_nvme_attach_controller" 00:35:25.657 },{ 00:35:25.657 "params": { 00:35:25.657 "name": "Nvme1", 00:35:25.657 "trtype": "tcp", 00:35:25.657 "traddr": "10.0.0.2", 00:35:25.657 "adrfam": "ipv4", 00:35:25.657 "trsvcid": "4420", 00:35:25.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:25.657 "hdgst": false, 00:35:25.657 "ddgst": false 00:35:25.657 }, 00:35:25.657 "method": "bdev_nvme_attach_controller" 00:35:25.657 }' 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.657 14:11:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:25.657 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:25.657 fio-3.35 00:35:25.657 Starting 2 threads 00:35:25.657 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.685 00:35:35.685 filename0: (groupid=0, jobs=1): err= 0: pid=1623854: Sun Jul 14 14:11:13 2024 00:35:35.685 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:35:35.685 slat (nsec): min=7037, max=34846, avg=8916.37, stdev=2485.73 00:35:35.685 clat (usec): min=40813, max=45111, avg=40989.37, stdev=264.51 00:35:35.685 lat (usec): min=40836, max=45146, avg=40998.28, stdev=265.12 00:35:35.685 clat percentiles (usec): 00:35:35.685 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:35.685 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:35.685 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:35.685 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:35:35.685 | 99.99th=[45351] 00:35:35.685 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:35:35.685 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:35.685 lat (msec) : 50=100.00% 00:35:35.685 cpu : usr=94.71%, sys=4.99%, ctx=15, majf=0, minf=90 00:35:35.685 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:35.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.685 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.685 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:35.685 filename1: (groupid=0, jobs=1): err= 0: pid=1623855: Sun Jul 14 14:11:13 2024 00:35:35.685 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:35:35.685 slat (nsec): min=7015, max=56371, avg=8960.29, stdev=3272.31 00:35:35.685 clat (usec): min=40805, max=43088, avg=40980.86, stdev=137.02 00:35:35.685 lat (usec): min=40813, max=43123, avg=40989.82, stdev=137.57 00:35:35.685 clat percentiles (usec): 00:35:35.685 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:35.685 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:35.685 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:35.685 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:35:35.685 | 99.99th=[43254] 00:35:35.685 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:35:35.685 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:35.685 lat (msec) : 50=100.00% 00:35:35.685 cpu : usr=94.90%, sys=4.80%, ctx=14, majf=0, minf=153 00:35:35.685 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.685 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.685 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:35.685 00:35:35.685 Run status group 0 (all jobs): 00:35:35.685 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10007-10009msec 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 14:11:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.685 00:35:35.685 real 0m11.212s 00:35:35.685 user 0m20.061s 00:35:35.685 sys 0m1.296s 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 ************************************ 00:35:35.685 END TEST fio_dif_1_multi_subsystems 00:35:35.685 ************************************ 00:35:35.685 14:11:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:35.685 14:11:13 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:35.685 14:11:13 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 ************************************ 00:35:35.685 START TEST fio_dif_rand_params 00:35:35.685 ************************************ 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:35.685 14:11:13 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 bdev_null0 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:35.685 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:35.686 [2024-07-14 14:11:13.415059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:35.686 { 00:35:35.686 "params": { 00:35:35.686 "name": "Nvme$subsystem", 00:35:35.686 "trtype": "$TEST_TRANSPORT", 00:35:35.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:35.686 "adrfam": "ipv4", 00:35:35.686 "trsvcid": "$NVMF_PORT", 00:35:35.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:35.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:35.686 
"hdgst": ${hdgst:-false}, 00:35:35.686 "ddgst": ${ddgst:-false} 00:35:35.686 }, 00:35:35.686 "method": "bdev_nvme_attach_controller" 00:35:35.686 } 00:35:35.686 EOF 00:35:35.686 )") 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:35.686 14:11:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:35.686 "params": { 00:35:35.686 "name": "Nvme0", 00:35:35.686 "trtype": "tcp", 00:35:35.686 "traddr": "10.0.0.2", 00:35:35.686 "adrfam": "ipv4", 00:35:35.686 "trsvcid": "4420", 00:35:35.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:35.686 "hdgst": false, 00:35:35.686 "ddgst": false 00:35:35.686 }, 00:35:35.686 "method": "bdev_nvme_attach_controller" 00:35:35.686 }' 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:35.686 14:11:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.943 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:35.943 ... 00:35:35.943 fio-3.35 00:35:35.943 Starting 3 threads 00:35:35.943 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.494 00:35:42.494 filename0: (groupid=0, jobs=1): err= 0: pid=1625252: Sun Jul 14 14:11:19 2024 00:35:42.494 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(149MiB/5046msec) 00:35:42.494 slat (nsec): min=5002, max=71567, avg=15665.52, stdev=3988.99 00:35:42.494 clat (usec): min=4398, max=92585, avg=12631.97, stdev=8087.18 00:35:42.494 lat (usec): min=4411, max=92599, avg=12647.64, stdev=8087.20 00:35:42.494 clat percentiles (usec): 00:35:42.494 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 8029], 20.00th=[ 8848], 00:35:42.494 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:35:42.494 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14353], 95.00th=[15533], 00:35:42.494 | 99.00th=[52691], 99.50th=[53216], 99.90th=[55313], 99.95th=[92799], 00:35:42.494 | 99.99th=[92799] 00:35:42.494 bw ( KiB/s): min=17920, max=40529, per=34.74%, avg=30472.10, stdev=6674.99, samples=10 00:35:42.494 iops : min= 140, max= 316, avg=238.00, stdev=52.04, samples=10 00:35:42.494 lat (msec) : 10=26.74%, 20=69.66%, 50=0.92%, 100=2.68% 00:35:42.494 cpu : usr=94.77%, sys=4.76%, ctx=8, majf=0, minf=136 00:35:42.494 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.494 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:42.494 filename0: (groupid=0, jobs=1): err= 0: pid=1625253: Sun Jul 14 14:11:19 2024 00:35:42.494 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5006msec) 00:35:42.494 slat 
(nsec): min=4549, max=63136, avg=16955.17, stdev=5057.26 00:35:42.494 clat (usec): min=4085, max=59387, avg=13728.35, stdev=8450.25 00:35:42.494 lat (usec): min=4097, max=59399, avg=13745.31, stdev=8449.82 00:35:42.494 clat percentiles (usec): 00:35:42.494 | 1.00th=[ 5211], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9765], 00:35:42.494 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12911], 00:35:42.494 | 70.00th=[13698], 80.00th=[14615], 90.00th=[15664], 95.00th=[16909], 00:35:42.494 | 99.00th=[53216], 99.50th=[53740], 99.90th=[57934], 99.95th=[59507], 00:35:42.494 | 99.99th=[59507] 00:35:42.494 bw ( KiB/s): min=23808, max=33792, per=31.78%, avg=27878.40, stdev=3847.48, samples=10 00:35:42.494 iops : min= 186, max= 264, avg=217.80, stdev=30.06, samples=10 00:35:42.494 lat (msec) : 10=21.15%, 20=74.45%, 50=1.65%, 100=2.75% 00:35:42.494 cpu : usr=94.33%, sys=5.21%, ctx=13, majf=0, minf=119 00:35:42.494 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.494 issued rwts: total=1092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:42.494 filename0: (groupid=0, jobs=1): err= 0: pid=1625254: Sun Jul 14 14:11:19 2024 00:35:42.494 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(147MiB/5006msec) 00:35:42.494 slat (nsec): min=4970, max=37786, avg=15629.32, stdev=3509.74 00:35:42.494 clat (usec): min=4603, max=54714, avg=12780.72, stdev=7954.19 00:35:42.494 lat (usec): min=4617, max=54737, avg=12796.35, stdev=7954.12 00:35:42.494 clat percentiles (usec): 00:35:42.494 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[ 8979], 00:35:42.494 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12256], 00:35:42.494 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14222], 95.00th=[15795], 
00:35:42.494 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:35:42.494 | 99.99th=[54789] 00:35:42.494 bw ( KiB/s): min=23040, max=37120, per=34.15%, avg=29952.00, stdev=4695.66, samples=10 00:35:42.494 iops : min= 180, max= 290, avg=234.00, stdev=36.68, samples=10 00:35:42.494 lat (msec) : 10=24.98%, 20=71.18%, 50=1.45%, 100=2.39% 00:35:42.494 cpu : usr=94.97%, sys=4.58%, ctx=17, majf=0, minf=85 00:35:42.494 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.494 issued rwts: total=1173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.494 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:42.494 00:35:42.494 Run status group 0 (all jobs): 00:35:42.494 READ: bw=85.7MiB/s (89.8MB/s), 27.3MiB/s-29.6MiB/s (28.6MB/s-31.0MB/s), io=432MiB (453MB), run=5006-5046msec 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.494 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 bdev_null0 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.495 14:11:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 [2024-07-14 14:11:19.555288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 bdev_null1 00:35:42.495 14:11:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 bdev_null2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.495 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.496 { 00:35:42.496 "params": { 00:35:42.496 "name": "Nvme$subsystem", 00:35:42.496 "trtype": "$TEST_TRANSPORT", 00:35:42.496 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:35:42.496 "adrfam": "ipv4", 00:35:42.496 "trsvcid": "$NVMF_PORT", 00:35:42.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.496 "hdgst": ${hdgst:-false}, 00:35:42.496 "ddgst": ${ddgst:-false} 00:35:42.496 }, 00:35:42.496 "method": "bdev_nvme_attach_controller" 00:35:42.496 } 00:35:42.496 EOF 00:35:42.496 )") 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.496 { 00:35:42.496 "params": { 00:35:42.496 "name": "Nvme$subsystem", 00:35:42.496 "trtype": "$TEST_TRANSPORT", 00:35:42.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.496 "adrfam": "ipv4", 00:35:42.496 "trsvcid": "$NVMF_PORT", 00:35:42.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.496 "hdgst": ${hdgst:-false}, 00:35:42.496 "ddgst": ${ddgst:-false} 00:35:42.496 }, 00:35:42.496 "method": "bdev_nvme_attach_controller" 00:35:42.496 } 00:35:42.496 EOF 00:35:42.496 )") 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.496 { 00:35:42.496 "params": { 00:35:42.496 "name": "Nvme$subsystem", 00:35:42.496 "trtype": "$TEST_TRANSPORT", 00:35:42.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.496 "adrfam": "ipv4", 00:35:42.496 "trsvcid": "$NVMF_PORT", 00:35:42.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.496 "hdgst": ${hdgst:-false}, 00:35:42.496 "ddgst": ${ddgst:-false} 00:35:42.496 }, 00:35:42.496 "method": "bdev_nvme_attach_controller" 00:35:42.496 } 00:35:42.496 EOF 00:35:42.496 )") 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:42.496 "params": { 00:35:42.496 "name": "Nvme0", 00:35:42.496 "trtype": "tcp", 00:35:42.496 "traddr": "10.0.0.2", 00:35:42.496 "adrfam": "ipv4", 00:35:42.496 "trsvcid": "4420", 00:35:42.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.496 "hdgst": false, 00:35:42.496 "ddgst": false 00:35:42.496 }, 00:35:42.496 "method": "bdev_nvme_attach_controller" 00:35:42.496 },{ 00:35:42.496 "params": { 00:35:42.496 "name": "Nvme1", 00:35:42.496 "trtype": "tcp", 00:35:42.496 "traddr": "10.0.0.2", 00:35:42.496 "adrfam": "ipv4", 00:35:42.496 "trsvcid": "4420", 00:35:42.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.496 "hdgst": false, 00:35:42.496 "ddgst": false 00:35:42.496 }, 00:35:42.496 "method": "bdev_nvme_attach_controller" 00:35:42.496 },{ 00:35:42.496 "params": { 00:35:42.496 "name": "Nvme2", 00:35:42.496 "trtype": "tcp", 00:35:42.496 "traddr": "10.0.0.2", 00:35:42.496 "adrfam": "ipv4", 00:35:42.496 "trsvcid": "4420", 00:35:42.496 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:42.496 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:42.496 "hdgst": false, 00:35:42.496 "ddgst": false 00:35:42.496 }, 00:35:42.496 "method": "bdev_nvme_attach_controller" 00:35:42.496 }' 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.496 14:11:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.496 14:11:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.496 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:42.496 ... 00:35:42.496 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:42.496 ... 00:35:42.496 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:42.496 ... 
00:35:42.496 fio-3.35 00:35:42.496 Starting 24 threads 00:35:42.496 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.691 00:35:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1626109: Sun Jul 14 14:11:30 2024 00:35:54.691 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10011msec) 00:35:54.691 slat (nsec): min=11883, max=87307, avg=36440.14, stdev=12286.04 00:35:54.691 clat (usec): min=10030, max=64717, avg=33074.41, stdev=2370.93 00:35:54.691 lat (usec): min=10056, max=64750, avg=33110.85, stdev=2370.36 00:35:54.691 clat percentiles (usec): 00:35:54.691 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.691 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.691 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.691 | 99.00th=[40109], 99.50th=[42206], 99.90th=[64750], 99.95th=[64750], 00:35:54.691 | 99.99th=[64750] 00:35:54.691 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1906.53, stdev=72.59, samples=19 00:35:54.691 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:35:54.691 lat (msec) : 20=0.46%, 50=99.21%, 100=0.33% 00:35:54.691 cpu : usr=98.05%, sys=1.54%, ctx=16, majf=0, minf=46 00:35:54.691 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1626110: Sun Jul 14 14:11:30 2024 00:35:54.691 read: IOPS=484, BW=1939KiB/s (1985kB/s)(18.9MiB/10002msec) 00:35:54.691 slat (usec): min=8, max=123, avg=29.58, stdev=15.86 00:35:54.691 clat (usec): min=3866, max=49095, avg=32757.64, stdev=3168.22 00:35:54.691 lat (usec): min=3881, max=49124, avg=32787.22, stdev=3169.05 
00:35:54.691 clat percentiles (usec): 00:35:54.691 | 1.00th=[11076], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:54.691 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:54.691 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.691 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.691 | 99.99th=[49021] 00:35:54.691 bw ( KiB/s): min= 1792, max= 2176, per=4.21%, avg=1940.21, stdev=77.07, samples=19 00:35:54.691 iops : min= 448, max= 544, avg=485.05, stdev=19.27, samples=19 00:35:54.691 lat (msec) : 4=0.12%, 10=0.87%, 20=0.66%, 50=98.35% 00:35:54.691 cpu : usr=96.54%, sys=2.38%, ctx=91, majf=0, minf=26 00:35:54.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1626111: Sun Jul 14 14:11:30 2024 00:35:54.691 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:35:54.691 slat (usec): min=14, max=122, avg=40.09, stdev=17.49 00:35:54.691 clat (usec): min=19267, max=43455, avg=32929.91, stdev=1283.40 00:35:54.691 lat (usec): min=19315, max=43496, avg=32970.00, stdev=1284.70 00:35:54.691 clat percentiles (usec): 00:35:54.691 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:54.691 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:54.691 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:54.691 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.691 | 99.99th=[43254] 00:35:54.691 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.26, stdev=29.37, samples=19 00:35:54.691 iops 
: min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:35:54.691 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.691 cpu : usr=98.15%, sys=1.41%, ctx=13, majf=0, minf=36 00:35:54.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1626112: Sun Jul 14 14:11:30 2024 00:35:54.691 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10005msec) 00:35:54.691 slat (nsec): min=8178, max=83870, avg=34196.17, stdev=11555.26 00:35:54.691 clat (usec): min=9841, max=59882, avg=33031.19, stdev=2430.69 00:35:54.691 lat (usec): min=9850, max=59919, avg=33065.38, stdev=2431.45 00:35:54.691 clat percentiles (usec): 00:35:54.691 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.691 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:54.691 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.691 | 99.00th=[40109], 99.50th=[42206], 99.90th=[59507], 99.95th=[60031], 00:35:54.691 | 99.99th=[60031] 00:35:54.691 bw ( KiB/s): min= 1667, max= 1920, per=4.13%, avg=1906.68, stdev=58.04, samples=19 00:35:54.691 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:35:54.691 lat (msec) : 10=0.25%, 20=0.42%, 50=99.00%, 100=0.33% 00:35:54.691 cpu : usr=98.12%, sys=1.37%, ctx=30, majf=0, minf=44 00:35:54.691 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:54.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.691 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:54.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.691 filename0: (groupid=0, jobs=1): err= 0: pid=1626113: Sun Jul 14 14:11:30 2024 00:35:54.691 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.9MiB/10028msec) 00:35:54.691 slat (usec): min=8, max=219, avg=30.78, stdev=16.25 00:35:54.691 clat (usec): min=9325, max=44038, avg=32954.26, stdev=2071.43 00:35:54.691 lat (usec): min=9349, max=44241, avg=32985.04, stdev=2071.84 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[22152], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:54.692 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:54.692 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[36439], 99.50th=[41157], 99.90th=[43254], 99.95th=[43779], 00:35:54.692 | 99.99th=[43779] 00:35:54.692 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.40, stdev=65.33, samples=20 00:35:54.692 iops : min= 448, max= 512, avg=481.60, stdev=16.33, samples=20 00:35:54.692 lat (msec) : 10=0.29%, 20=0.70%, 50=99.01% 00:35:54.692 cpu : usr=98.33%, sys=1.26%, ctx=16, majf=0, minf=39 00:35:54.692 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename0: (groupid=0, jobs=1): err= 0: pid=1626114: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:35:54.692 slat (usec): min=10, max=301, avg=39.88, stdev=13.14 00:35:54.692 clat (usec): min=19369, max=43394, avg=32985.49, stdev=1275.05 00:35:54.692 lat (usec): min=19394, max=43446, avg=33025.37, stdev=1275.13 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[31851], 
5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.692 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.692 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.692 | 99.99th=[43254] 00:35:54.692 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.26, stdev=29.37, samples=19 00:35:54.692 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:35:54.692 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.692 cpu : usr=97.78%, sys=1.61%, ctx=75, majf=0, minf=39 00:35:54.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename0: (groupid=0, jobs=1): err= 0: pid=1626115: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=502, BW=2010KiB/s (2058kB/s)(19.6MiB/10006msec) 00:35:54.692 slat (usec): min=7, max=193, avg=33.07, stdev=25.37 00:35:54.692 clat (usec): min=6080, max=99977, avg=31663.77, stdev=5479.47 00:35:54.692 lat (msec): min=6, max=100, avg=31.70, stdev= 5.48 00:35:54.692 clat percentiles (msec): 00:35:54.692 | 1.00th=[ 20], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 32], 00:35:54.692 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:35:54.692 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:35:54.692 | 99.00th=[ 46], 99.50th=[ 50], 99.90th=[ 81], 99.95th=[ 81], 00:35:54.692 | 99.99th=[ 101] 00:35:54.692 bw ( KiB/s): min= 1648, max= 2656, per=4.33%, avg=1996.63, stdev=212.70, samples=19 00:35:54.692 iops : min= 412, max= 664, avg=499.16, stdev=53.18, samples=19 00:35:54.692 lat (msec) : 10=0.20%, 20=3.90%, 50=95.42%, 100=0.48% 
00:35:54.692 cpu : usr=97.01%, sys=2.09%, ctx=64, majf=0, minf=33 00:35:54.692 IO depths : 1=0.2%, 2=1.8%, 4=7.3%, 8=75.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=90.4%, 8=7.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=5027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename0: (groupid=0, jobs=1): err= 0: pid=1626116: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10010msec) 00:35:54.692 slat (nsec): min=6801, max=83518, avg=34193.74, stdev=12673.20 00:35:54.692 clat (usec): min=9856, max=64345, avg=33077.44, stdev=2580.45 00:35:54.692 lat (usec): min=9901, max=64364, avg=33111.64, stdev=2580.08 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.692 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:54.692 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[40633], 99.50th=[42206], 99.90th=[64226], 99.95th=[64226], 00:35:54.692 | 99.99th=[64226] 00:35:54.692 bw ( KiB/s): min= 1667, max= 2048, per=4.13%, avg=1906.68, stdev=72.04, samples=19 00:35:54.692 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:35:54.692 lat (msec) : 10=0.08%, 20=0.58%, 50=99.00%, 100=0.33% 00:35:54.692 cpu : usr=97.21%, sys=1.81%, ctx=169, majf=0, minf=44 00:35:54.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename1: 
(groupid=0, jobs=1): err= 0: pid=1626117: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.9MiB/10028msec) 00:35:54.692 slat (usec): min=10, max=125, avg=42.18, stdev=17.20 00:35:54.692 clat (usec): min=9508, max=43404, avg=32783.71, stdev=2010.67 00:35:54.692 lat (usec): min=9532, max=43454, avg=32825.89, stdev=2012.41 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[28443], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:54.692 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:54.692 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[36439], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:35:54.692 | 99.99th=[43254] 00:35:54.692 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.40, stdev=65.33, samples=20 00:35:54.692 iops : min= 448, max= 512, avg=481.60, stdev=16.33, samples=20 00:35:54.692 lat (msec) : 10=0.29%, 20=0.70%, 50=99.01% 00:35:54.692 cpu : usr=98.11%, sys=1.47%, ctx=12, majf=0, minf=36 00:35:54.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename1: (groupid=0, jobs=1): err= 0: pid=1626118: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10006msec) 00:35:54.692 slat (usec): min=8, max=109, avg=41.03, stdev=17.54 00:35:54.692 clat (usec): min=6254, max=59710, avg=33035.62, stdev=2585.93 00:35:54.692 lat (usec): min=6266, max=59749, avg=33076.65, stdev=2586.01 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[28705], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:54.692 | 30.00th=[32637], 40.00th=[32900], 
50.00th=[32900], 60.00th=[32900], 00:35:54.692 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[40633], 99.50th=[42206], 99.90th=[59507], 99.95th=[59507], 00:35:54.692 | 99.99th=[59507] 00:35:54.692 bw ( KiB/s): min= 1667, max= 1920, per=4.13%, avg=1906.68, stdev=58.04, samples=19 00:35:54.692 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:35:54.692 lat (msec) : 10=0.33%, 20=0.33%, 50=99.00%, 100=0.33% 00:35:54.692 cpu : usr=96.99%, sys=2.03%, ctx=195, majf=0, minf=39 00:35:54.692 IO depths : 1=1.5%, 2=7.7%, 4=24.8%, 8=54.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename1: (groupid=0, jobs=1): err= 0: pid=1626119: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:35:54.692 slat (nsec): min=13473, max=96852, avg=39962.16, stdev=12014.56 00:35:54.692 clat (usec): min=19326, max=43402, avg=32983.35, stdev=1277.58 00:35:54.692 lat (usec): min=19360, max=43447, avg=33023.31, stdev=1277.36 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.692 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.692 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.692 | 99.99th=[43254] 00:35:54.692 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.26, stdev=29.37, samples=19 00:35:54.692 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:35:54.692 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.692 cpu : usr=98.09%, sys=1.51%, ctx=14, 
majf=0, minf=40 00:35:54.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename1: (groupid=0, jobs=1): err= 0: pid=1626120: Sun Jul 14 14:11:30 2024 00:35:54.692 read: IOPS=484, BW=1939KiB/s (1985kB/s)(18.9MiB/10006msec) 00:35:54.692 slat (usec): min=7, max=135, avg=35.63, stdev=17.71 00:35:54.692 clat (usec): min=9174, max=80325, avg=32712.12, stdev=4143.66 00:35:54.692 lat (usec): min=9188, max=80359, avg=32747.75, stdev=4146.34 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[19792], 5.00th=[28705], 10.00th=[32375], 20.00th=[32375], 00:35:54.692 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.692 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:54.692 | 99.00th=[43254], 99.50th=[48497], 99.90th=[80217], 99.95th=[80217], 00:35:54.692 | 99.99th=[80217] 00:35:54.692 bw ( KiB/s): min= 1664, max= 2272, per=4.18%, avg=1927.58, stdev=105.51, samples=19 00:35:54.692 iops : min= 416, max= 568, avg=481.89, stdev=26.38, samples=19 00:35:54.692 lat (msec) : 10=0.33%, 20=1.20%, 50=98.14%, 100=0.33% 00:35:54.692 cpu : usr=95.70%, sys=2.67%, ctx=633, majf=0, minf=48 00:35:54.692 IO depths : 1=4.5%, 2=9.0%, 4=18.4%, 8=58.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:54.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 complete : 0=0.0%, 4=92.7%, 8=2.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.692 issued rwts: total=4850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.692 filename1: (groupid=0, jobs=1): err= 0: pid=1626121: Sun Jul 14 14:11:30 2024 
00:35:54.692 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.7MiB/10006msec) 00:35:54.692 slat (nsec): min=8213, max=88872, avg=39578.45, stdev=12194.36 00:35:54.692 clat (usec): min=9171, max=79637, avg=33050.20, stdev=3447.94 00:35:54.692 lat (usec): min=9185, max=79671, avg=33089.78, stdev=3448.90 00:35:54.692 clat percentiles (usec): 00:35:54.692 | 1.00th=[29230], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.693 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[40633], 99.50th=[52167], 99.90th=[79168], 99.95th=[79168], 00:35:54.693 | 99.99th=[79168] 00:35:54.693 bw ( KiB/s): min= 1667, max= 1920, per=4.12%, avg=1902.47, stdev=59.89, samples=19 00:35:54.693 iops : min= 416, max= 480, avg=475.58, stdev=15.14, samples=19 00:35:54.693 lat (msec) : 10=0.33%, 20=0.33%, 50=98.79%, 100=0.54% 00:35:54.693 cpu : usr=96.07%, sys=2.43%, ctx=211, majf=0, minf=46 00:35:54.693 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename1: (groupid=0, jobs=1): err= 0: pid=1626122: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10003msec) 00:35:54.693 slat (usec): min=8, max=126, avg=33.68, stdev=15.79 00:35:54.693 clat (usec): min=19426, max=43399, avg=33068.36, stdev=1287.30 00:35:54.693 lat (usec): min=19456, max=43417, avg=33102.04, stdev=1284.80 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.693 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 
| 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.693 | 99.99th=[43254] 00:35:54.693 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:35:54.693 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:35:54.693 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.693 cpu : usr=98.23%, sys=1.37%, ctx=15, majf=0, minf=60 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename1: (groupid=0, jobs=1): err= 0: pid=1626123: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10002msec) 00:35:54.693 slat (nsec): min=8863, max=84643, avg=37064.44, stdev=11737.41 00:35:54.693 clat (usec): min=17750, max=66607, avg=33128.75, stdev=2308.72 00:35:54.693 lat (usec): min=17772, max=66641, avg=33165.81, stdev=2308.50 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.693 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[40109], 99.50th=[42206], 99.90th=[66323], 99.95th=[66323], 00:35:54.693 | 99.99th=[66847] 00:35:54.693 bw ( KiB/s): min= 1667, max= 2048, per=4.13%, avg=1906.68, stdev=72.04, samples=19 00:35:54.693 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:35:54.693 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:35:54.693 cpu : usr=97.22%, sys=1.72%, ctx=128, majf=0, minf=36 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename1: (groupid=0, jobs=1): err= 0: pid=1626124: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:35:54.693 slat (usec): min=8, max=221, avg=17.03, stdev=10.48 00:35:54.693 clat (usec): min=8209, max=44366, avg=32978.53, stdev=2392.64 00:35:54.693 lat (usec): min=8221, max=44385, avg=32995.55, stdev=2391.77 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[26084], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:35:54.693 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:35:54.693 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[36439], 99.50th=[36963], 99.90th=[44303], 99.95th=[44303], 00:35:54.693 | 99.99th=[44303] 00:35:54.693 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1926.74, stdev=29.37, samples=19 00:35:54.693 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:35:54.693 lat (msec) : 10=0.66%, 20=0.33%, 50=99.01% 00:35:54.693 cpu : usr=97.85%, sys=1.53%, ctx=107, majf=0, minf=71 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename2: (groupid=0, jobs=1): err= 0: pid=1626125: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10004msec) 
00:35:54.693 slat (nsec): min=10513, max=78221, avg=38168.27, stdev=10370.90 00:35:54.693 clat (usec): min=19383, max=43436, avg=32997.84, stdev=1277.54 00:35:54.693 lat (usec): min=19412, max=43472, avg=33036.01, stdev=1277.55 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.693 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.693 | 99.99th=[43254] 00:35:54.693 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.26, stdev=29.37, samples=19 00:35:54.693 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:35:54.693 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.693 cpu : usr=98.53%, sys=1.07%, ctx=16, majf=0, minf=42 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename2: (groupid=0, jobs=1): err= 0: pid=1626126: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:35:54.693 slat (usec): min=10, max=116, avg=46.63, stdev=16.46 00:35:54.693 clat (usec): min=19121, max=43441, avg=32930.33, stdev=1291.30 00:35:54.693 lat (usec): min=19163, max=43495, avg=32976.96, stdev=1290.24 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:54.693 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 
99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.693 | 99.99th=[43254] 00:35:54.693 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1913.26, stdev=29.37, samples=19 00:35:54.693 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:35:54.693 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.693 cpu : usr=96.63%, sys=2.17%, ctx=141, majf=0, minf=44 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename2: (groupid=0, jobs=1): err= 0: pid=1626127: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:35:54.693 slat (usec): min=13, max=111, avg=42.35, stdev=16.78 00:35:54.693 clat (usec): min=17748, max=69531, avg=33088.37, stdev=2440.32 00:35:54.693 lat (usec): min=17781, max=69626, avg=33130.72, stdev=2442.02 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:54.693 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[40109], 99.50th=[41681], 99.90th=[68682], 99.95th=[69731], 00:35:54.693 | 99.99th=[69731] 00:35:54.693 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1906.53, stdev=72.59, samples=19 00:35:54.693 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:35:54.693 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:35:54.693 cpu : usr=98.42%, sys=1.19%, ctx=15, majf=0, minf=54 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename2: (groupid=0, jobs=1): err= 0: pid=1626128: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:35:54.693 slat (usec): min=9, max=102, avg=39.78, stdev=12.81 00:35:54.693 clat (usec): min=19320, max=43396, avg=32997.53, stdev=1311.78 00:35:54.693 lat (usec): min=19348, max=43452, avg=33037.31, stdev=1311.27 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.693 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.693 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[36439], 99.50th=[40633], 99.90th=[43254], 99.95th=[43254], 00:35:54.693 | 99.99th=[43254] 00:35:54.693 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:35:54.693 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:35:54.693 lat (msec) : 20=0.33%, 50=99.67% 00:35:54.693 cpu : usr=97.67%, sys=1.55%, ctx=78, majf=0, minf=44 00:35:54.693 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.693 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.693 filename2: (groupid=0, jobs=1): err= 0: pid=1626129: Sun Jul 14 14:11:30 2024 00:35:54.693 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.9MiB/10025msec) 00:35:54.693 slat (nsec): min=6589, max=93236, avg=26958.86, stdev=14429.53 00:35:54.693 clat (usec): 
min=9070, max=43244, avg=32968.91, stdev=2067.25 00:35:54.693 lat (usec): min=9093, max=43268, avg=32995.87, stdev=2067.71 00:35:54.693 clat percentiles (usec): 00:35:54.693 | 1.00th=[28967], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:54.693 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:35:54.693 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.693 | 99.00th=[36439], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:35:54.694 | 99.99th=[43254] 00:35:54.694 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.74, stdev=90.24, samples=19 00:35:54.694 iops : min= 448, max= 512, avg=481.68, stdev=22.56, samples=19 00:35:54.694 lat (msec) : 10=0.33%, 20=0.39%, 50=99.28% 00:35:54.694 cpu : usr=98.27%, sys=1.34%, ctx=14, majf=0, minf=40 00:35:54.694 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.694 filename2: (groupid=0, jobs=1): err= 0: pid=1626130: Sun Jul 14 14:11:30 2024 00:35:54.694 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10014msec) 00:35:54.694 slat (nsec): min=7231, max=78154, avg=21750.46, stdev=11593.64 00:35:54.694 clat (usec): min=3751, max=44366, avg=32733.16, stdev=3226.92 00:35:54.694 lat (usec): min=3768, max=44386, avg=32754.91, stdev=3226.79 00:35:54.694 clat percentiles (usec): 00:35:54.694 | 1.00th=[11469], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:54.694 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:35:54.694 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.694 | 99.00th=[36439], 99.50th=[36963], 99.90th=[44303], 99.95th=[44303], 00:35:54.694 | 
99.99th=[44303] 00:35:54.694 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=1939.20, stdev=62.64, samples=20 00:35:54.694 iops : min= 480, max= 544, avg=484.80, stdev=15.66, samples=20 00:35:54.694 lat (msec) : 4=0.14%, 10=0.84%, 20=0.72%, 50=98.29% 00:35:54.694 cpu : usr=97.92%, sys=1.56%, ctx=50, majf=0, minf=77 00:35:54.694 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.694 filename2: (groupid=0, jobs=1): err= 0: pid=1626131: Sun Jul 14 14:11:30 2024 00:35:54.694 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:35:54.694 slat (nsec): min=8252, max=93970, avg=30098.29, stdev=15353.15 00:35:54.694 clat (usec): min=18077, max=67409, avg=33196.37, stdev=2347.81 00:35:54.694 lat (usec): min=18101, max=67434, avg=33226.46, stdev=2347.46 00:35:54.694 clat percentiles (usec): 00:35:54.694 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:54.694 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:35:54.694 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.694 | 99.00th=[40633], 99.50th=[42206], 99.90th=[67634], 99.95th=[67634], 00:35:54.694 | 99.99th=[67634] 00:35:54.694 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1906.53, stdev=72.59, samples=19 00:35:54.694 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:35:54.694 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:35:54.694 cpu : usr=98.43%, sys=1.18%, ctx=14, majf=0, minf=43 00:35:54.694 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.694 filename2: (groupid=0, jobs=1): err= 0: pid=1626132: Sun Jul 14 14:11:30 2024 00:35:54.694 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10001msec) 00:35:54.694 slat (nsec): min=10899, max=84844, avg=35822.89, stdev=11054.49 00:35:54.694 clat (usec): min=17697, max=72154, avg=33122.12, stdev=2311.53 00:35:54.694 lat (usec): min=17730, max=72186, avg=33157.94, stdev=2310.76 00:35:54.694 clat percentiles (usec): 00:35:54.694 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:54.694 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:54.694 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:54.694 | 99.00th=[40109], 99.50th=[42206], 99.90th=[65274], 99.95th=[65799], 00:35:54.694 | 99.99th=[71828] 00:35:54.694 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1906.53, stdev=72.59, samples=19 00:35:54.694 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:35:54.694 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:35:54.694 cpu : usr=98.25%, sys=1.37%, ctx=15, majf=0, minf=34 00:35:54.694 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:54.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.694 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.694 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:54.694 00:35:54.694 Run status group 0 (all jobs): 00:35:54.694 READ: bw=45.0MiB/s (47.2MB/s), 1913KiB/s-2010KiB/s (1959kB/s-2058kB/s), io=452MiB (474MB), run=10001-10028msec 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 
00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 
00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 bdev_null0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t 
tcp -a 10.0.0.2 -s 4420 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 [2024-07-14 14:11:31.257793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.694 bdev_null1 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.694 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:54.695 { 00:35:54.695 "params": { 00:35:54.695 "name": "Nvme$subsystem", 00:35:54.695 "trtype": "$TEST_TRANSPORT", 00:35:54.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.695 "adrfam": "ipv4", 00:35:54.695 "trsvcid": "$NVMF_PORT", 00:35:54.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.695 "hdgst": ${hdgst:-false}, 00:35:54.695 "ddgst": ${ddgst:-false} 00:35:54.695 }, 00:35:54.695 "method": "bdev_nvme_attach_controller" 00:35:54.695 } 00:35:54.695 EOF 00:35:54.695 
)") 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:54.695 14:11:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:54.695 { 00:35:54.695 "params": { 00:35:54.695 "name": "Nvme$subsystem", 00:35:54.695 "trtype": "$TEST_TRANSPORT", 00:35:54.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.695 "adrfam": "ipv4", 00:35:54.695 "trsvcid": "$NVMF_PORT", 00:35:54.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.695 "hdgst": ${hdgst:-false}, 00:35:54.695 "ddgst": ${ddgst:-false} 00:35:54.695 }, 00:35:54.695 "method": "bdev_nvme_attach_controller" 00:35:54.695 } 00:35:54.695 EOF 00:35:54.695 )") 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:54.695 "params": { 00:35:54.695 "name": "Nvme0", 00:35:54.695 "trtype": "tcp", 00:35:54.695 "traddr": "10.0.0.2", 00:35:54.695 "adrfam": "ipv4", 00:35:54.695 "trsvcid": "4420", 00:35:54.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.695 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.695 "hdgst": false, 00:35:54.695 "ddgst": false 00:35:54.695 }, 00:35:54.695 "method": "bdev_nvme_attach_controller" 00:35:54.695 },{ 00:35:54.695 "params": { 00:35:54.695 "name": "Nvme1", 00:35:54.695 "trtype": "tcp", 00:35:54.695 "traddr": "10.0.0.2", 00:35:54.695 "adrfam": "ipv4", 00:35:54.695 "trsvcid": "4420", 00:35:54.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.695 "hdgst": false, 00:35:54.695 "ddgst": false 00:35:54.695 }, 00:35:54.695 "method": "bdev_nvme_attach_controller" 00:35:54.695 }' 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:54.695 14:11:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:54.695 14:11:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.695 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:54.695 ... 00:35:54.695 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:54.695 ... 00:35:54.695 fio-3.35 00:35:54.695 Starting 4 threads 00:35:54.695 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.954 00:35:59.954 filename0: (groupid=0, jobs=1): err= 0: pid=1627400: Sun Jul 14 14:11:37 2024 00:35:59.954 read: IOPS=1814, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5002msec) 00:35:59.954 slat (nsec): min=6530, max=71120, avg=17874.93, stdev=10648.20 00:35:59.954 clat (usec): min=748, max=8197, avg=4344.52, stdev=512.20 00:35:59.954 lat (usec): min=755, max=8211, avg=4362.39, stdev=512.28 00:35:59.954 clat percentiles (usec): 00:35:59.954 | 1.00th=[ 2835], 5.00th=[ 3687], 10.00th=[ 3884], 20.00th=[ 4080], 00:35:59.954 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:59.954 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 5014], 00:35:59.954 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7898], 99.95th=[ 7898], 00:35:59.954 | 99.99th=[ 8225] 00:35:59.954 bw ( KiB/s): min=13947, max=14944, per=25.30%, avg=14513.10, stdev=327.16, samples=10 00:35:59.954 iops : min= 1743, max= 1868, avg=1814.10, stdev=40.97, samples=10 00:35:59.954 lat (usec) : 750=0.01%, 1000=0.01% 00:35:59.954 lat (msec) : 2=0.26%, 4=14.06%, 10=85.66% 00:35:59.954 cpu : usr=95.08%, sys=4.36%, ctx=16, majf=0, minf=108 00:35:59.954 IO depths : 1=0.4%, 2=18.0%, 4=54.9%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.954 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 issued rwts: total=9077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.954 filename0: (groupid=0, jobs=1): err= 0: pid=1627401: Sun Jul 14 14:11:37 2024 00:35:59.954 read: IOPS=1772, BW=13.8MiB/s (14.5MB/s)(69.2MiB/5001msec) 00:35:59.954 slat (nsec): min=6518, max=75430, avg=20813.15, stdev=9961.82 00:35:59.954 clat (usec): min=940, max=8287, avg=4437.80, stdev=668.11 00:35:59.954 lat (usec): min=953, max=8311, avg=4458.61, stdev=667.41 00:35:59.954 clat percentiles (usec): 00:35:59.954 | 1.00th=[ 2442], 5.00th=[ 3752], 10.00th=[ 4015], 20.00th=[ 4146], 00:35:59.954 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:59.954 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5669], 00:35:59.954 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 8094], 99.95th=[ 8225], 00:35:59.954 | 99.99th=[ 8291] 00:35:59.954 bw ( KiB/s): min=13280, max=14704, per=24.68%, avg=14159.44, stdev=506.78, samples=9 00:35:59.954 iops : min= 1660, max= 1838, avg=1769.89, stdev=63.38, samples=9 00:35:59.954 lat (usec) : 1000=0.03% 00:35:59.954 lat (msec) : 2=0.64%, 4=8.76%, 10=90.57% 00:35:59.954 cpu : usr=95.44%, sys=4.02%, ctx=12, majf=0, minf=64 00:35:59.954 IO depths : 1=0.2%, 2=17.2%, 4=55.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 issued rwts: total=8862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.954 filename1: (groupid=0, jobs=1): err= 0: pid=1627402: Sun Jul 14 14:11:37 2024 00:35:59.954 read: IOPS=1775, BW=13.9MiB/s (14.5MB/s)(69.4MiB/5001msec) 00:35:59.954 slat (usec): min=6, 
max=100, avg=20.47, stdev=10.70 00:35:59.954 clat (usec): min=959, max=8610, avg=4431.99, stdev=659.79 00:35:59.954 lat (usec): min=986, max=8618, avg=4452.46, stdev=658.83 00:35:59.954 clat percentiles (usec): 00:35:59.954 | 1.00th=[ 2343], 5.00th=[ 3720], 10.00th=[ 4015], 20.00th=[ 4146], 00:35:59.954 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:59.954 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5669], 00:35:59.954 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8160], 99.95th=[ 8225], 00:35:59.954 | 99.99th=[ 8586] 00:35:59.954 bw ( KiB/s): min=13424, max=14752, per=24.74%, avg=14192.00, stdev=542.47, samples=9 00:35:59.954 iops : min= 1678, max= 1844, avg=1774.00, stdev=67.81, samples=9 00:35:59.954 lat (usec) : 1000=0.02% 00:35:59.954 lat (msec) : 2=0.66%, 4=8.72%, 10=90.60% 00:35:59.954 cpu : usr=95.48%, sys=4.04%, ctx=18, majf=0, minf=152 00:35:59.954 IO depths : 1=0.3%, 2=17.5%, 4=55.6%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 issued rwts: total=8879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.954 filename1: (groupid=0, jobs=1): err= 0: pid=1627403: Sun Jul 14 14:11:37 2024 00:35:59.954 read: IOPS=1809, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5003msec) 00:35:59.954 slat (nsec): min=6467, max=85446, avg=15409.95, stdev=9268.04 00:35:59.954 clat (usec): min=857, max=9548, avg=4366.19, stdev=518.52 00:35:59.954 lat (usec): min=874, max=9583, avg=4381.60, stdev=518.35 00:35:59.954 clat percentiles (usec): 00:35:59.954 | 1.00th=[ 3032], 5.00th=[ 3720], 10.00th=[ 3916], 20.00th=[ 4113], 00:35:59.954 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:59.954 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5080], 00:35:59.954 | 99.00th=[ 
6194], 99.50th=[ 6783], 99.90th=[ 8291], 99.95th=[ 9241], 00:35:59.954 | 99.99th=[ 9503] 00:35:59.954 bw ( KiB/s): min=13552, max=14976, per=25.23%, avg=14475.20, stdev=436.88, samples=10 00:35:59.954 iops : min= 1694, max= 1872, avg=1809.40, stdev=54.61, samples=10 00:35:59.954 lat (usec) : 1000=0.03% 00:35:59.954 lat (msec) : 2=0.20%, 4=12.79%, 10=86.98% 00:35:59.954 cpu : usr=95.08%, sys=4.44%, ctx=18, majf=0, minf=72 00:35:59.954 IO depths : 1=0.6%, 2=15.6%, 4=57.0%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.954 issued rwts: total=9055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.954 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.954 00:35:59.954 Run status group 0 (all jobs): 00:35:59.954 READ: bw=56.0MiB/s (58.7MB/s), 13.8MiB/s-14.2MiB/s (14.5MB/s-14.9MB/s), io=280MiB (294MB), run=5001-5003msec 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.954 00:35:59.954 real 0m24.341s 00:35:59.954 user 4m32.721s 00:35:59.954 sys 0m6.494s 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:59.954 14:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:59.954 ************************************ 00:35:59.954 END TEST fio_dif_rand_params 00:35:59.954 ************************************ 00:35:59.954 14:11:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 
00:35:59.954 14:11:37 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:59.954 14:11:37 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:59.955 14:11:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:59.955 ************************************ 00:35:59.955 START TEST fio_dif_digest 00:35:59.955 ************************************ 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:59.955 bdev_null0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:59.955 [2024-07-14 14:11:37.810745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:59.955 { 00:35:59.955 "params": { 00:35:59.955 "name": "Nvme$subsystem", 00:35:59.955 "trtype": "$TEST_TRANSPORT", 00:35:59.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:59.955 "adrfam": "ipv4", 00:35:59.955 "trsvcid": "$NVMF_PORT", 00:35:59.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:59.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:59.955 "hdgst": ${hdgst:-false}, 00:35:59.955 "ddgst": ${ddgst:-false} 00:35:59.955 }, 00:35:59.955 "method": "bdev_nvme_attach_controller" 00:35:59.955 } 00:35:59.955 EOF 00:35:59.955 )") 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:59.955 "params": { 00:35:59.955 "name": "Nvme0", 00:35:59.955 "trtype": "tcp", 00:35:59.955 "traddr": "10.0.0.2", 00:35:59.955 "adrfam": "ipv4", 00:35:59.955 "trsvcid": "4420", 00:35:59.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.955 "hdgst": true, 00:35:59.955 "ddgst": true 00:35:59.955 }, 00:35:59.955 "method": "bdev_nvme_attach_controller" 00:35:59.955 }' 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:59.955 14:11:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.214 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:00.214 ... 
00:36:00.214 fio-3.35 00:36:00.214 Starting 3 threads 00:36:00.214 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.433 00:36:12.433 filename0: (groupid=0, jobs=1): err= 0: pid=1628274: Sun Jul 14 14:11:48 2024 00:36:12.433 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10046msec) 00:36:12.433 slat (nsec): min=4097, max=52340, avg=17228.66, stdev=5331.20 00:36:12.433 clat (usec): min=11135, max=52347, avg=14724.82, stdev=1534.06 00:36:12.433 lat (usec): min=11148, max=52360, avg=14742.05, stdev=1533.73 00:36:12.433 clat percentiles (usec): 00:36:12.433 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:36:12.433 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:36:12.433 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:36:12.433 | 99.00th=[17171], 99.50th=[17695], 99.90th=[22938], 99.95th=[49021], 00:36:12.433 | 99.99th=[52167] 00:36:12.433 bw ( KiB/s): min=25600, max=27136, per=34.37%, avg=26099.20, stdev=410.90, samples=20 00:36:12.433 iops : min= 200, max= 212, avg=203.90, stdev= 3.21, samples=20 00:36:12.433 lat (msec) : 20=99.76%, 50=0.20%, 100=0.05% 00:36:12.433 cpu : usr=93.99%, sys=5.54%, ctx=22, majf=0, minf=125 00:36:12.433 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.433 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.433 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.433 filename0: (groupid=0, jobs=1): err= 0: pid=1628275: Sun Jul 14 14:11:48 2024 00:36:12.433 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10046msec) 00:36:12.433 slat (nsec): min=5584, max=99279, avg=20027.48, stdev=4713.98 00:36:12.433 clat (usec): min=11616, max=52420, avg=15099.74, stdev=1546.35 00:36:12.433 lat (usec): min=11635, max=52439, avg=15119.77, 
stdev=1546.11 00:36:12.433 clat percentiles (usec): 00:36:12.433 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:36:12.433 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:36:12.433 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:36:12.433 | 99.00th=[17695], 99.50th=[18482], 99.90th=[49546], 99.95th=[52167], 00:36:12.433 | 99.99th=[52167] 00:36:12.433 bw ( KiB/s): min=23855, max=26112, per=33.52%, avg=25448.75, stdev=539.82, samples=20 00:36:12.433 iops : min= 186, max= 204, avg=198.80, stdev= 4.27, samples=20 00:36:12.433 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:36:12.433 cpu : usr=94.39%, sys=4.76%, ctx=95, majf=0, minf=152 00:36:12.433 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.433 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.433 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.433 filename0: (groupid=0, jobs=1): err= 0: pid=1628276: Sun Jul 14 14:11:48 2024 00:36:12.433 read: IOPS=191, BW=24.0MiB/s (25.2MB/s)(241MiB/10044msec) 00:36:12.433 slat (nsec): min=4908, max=42464, avg=16341.07, stdev=4013.70 00:36:12.433 clat (usec): min=11965, max=54542, avg=15586.74, stdev=1550.62 00:36:12.433 lat (usec): min=11984, max=54563, avg=15603.08, stdev=1550.56 00:36:12.433 clat percentiles (usec): 00:36:12.433 | 1.00th=[13435], 5.00th=[14091], 10.00th=[14353], 20.00th=[14746], 00:36:12.434 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:36:12.434 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 00:36:12.434 | 99.00th=[18220], 99.50th=[18482], 99.90th=[49546], 99.95th=[54789], 00:36:12.434 | 99.99th=[54789] 00:36:12.434 bw ( KiB/s): min=22528, max=25344, per=32.47%, avg=24652.80, stdev=576.04, samples=20 
00:36:12.434 iops : min= 176, max= 198, avg=192.60, stdev= 4.50, samples=20 00:36:12.434 lat (msec) : 20=99.74%, 50=0.21%, 100=0.05% 00:36:12.434 cpu : usr=94.25%, sys=5.02%, ctx=101, majf=0, minf=80 00:36:12.434 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.434 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.434 00:36:12.434 Run status group 0 (all jobs): 00:36:12.434 READ: bw=74.1MiB/s (77.7MB/s), 24.0MiB/s-25.4MiB/s (25.2MB/s-26.6MB/s), io=745MiB (781MB), run=10044-10046msec 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.434 00:36:12.434 real 0m11.184s 00:36:12.434 user 0m29.545s 00:36:12.434 sys 0m1.847s 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:12.434 14:11:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.434 ************************************ 00:36:12.434 END TEST fio_dif_digest 00:36:12.434 ************************************ 00:36:12.434 14:11:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:12.434 14:11:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:12.434 14:11:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:12.434 14:11:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:12.434 14:11:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:12.434 14:11:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:12.434 14:11:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:12.434 14:11:48 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:12.434 rmmod nvme_tcp 00:36:12.434 rmmod nvme_fabrics 00:36:12.434 rmmod nvme_keyring 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1622224 ']' 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1622224 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1622224 ']' 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1622224 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1622224 00:36:12.434 14:11:49 nvmf_dif -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1622224' 00:36:12.434 killing process with pid 1622224 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1622224 00:36:12.434 14:11:49 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1622224 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:12.434 14:11:49 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:12.434 Waiting for block devices as requested 00:36:12.434 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:12.691 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:12.691 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:12.949 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:12.949 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:12.949 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:12.949 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:13.206 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:13.206 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:13.206 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:13.206 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:13.465 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:13.465 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:13.465 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:13.722 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:13.722 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:13.722 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:13.980 14:11:51 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:13.980 14:11:51 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:13.980 14:11:51 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:13.980 
14:11:51 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:13.980 14:11:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.980 14:11:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.980 14:11:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.882 14:11:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:15.882 00:36:15.882 real 1m6.453s 00:36:15.882 user 6m28.678s 00:36:15.882 sys 0m18.002s 00:36:15.882 14:11:53 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:15.882 14:11:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.882 ************************************ 00:36:15.882 END TEST nvmf_dif 00:36:15.882 ************************************ 00:36:15.882 14:11:53 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:15.882 14:11:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:15.882 14:11:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:15.882 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:36:15.882 ************************************ 00:36:15.883 START TEST nvmf_abort_qd_sizes 00:36:15.883 ************************************ 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:15.883 * Looking for test storage... 
00:36:15.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.883 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:16.140 14:11:53 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:16.140 14:11:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:18.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:18.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:18.038 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:36:18.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:18.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:18.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:18.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:36:18.039 00:36:18.039 --- 10.0.0.2 ping statistics --- 00:36:18.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.039 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:18.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:18.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:36:18.039 00:36:18.039 --- 10.0.0.1 ping statistics --- 00:36:18.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.039 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:18.039 14:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:19.412 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:19.412 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:19.412 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:36:19.412 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:20.347 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1633057 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1633057 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1633057 ']' 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:20.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:20.347 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.347 [2024-07-14 14:11:58.275650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:20.347 [2024-07-14 14:11:58.275743] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:20.347 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.606 [2024-07-14 14:11:58.345249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:20.606 [2024-07-14 14:11:58.436018] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:20.606 [2024-07-14 14:11:58.436072] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:20.606 [2024-07-14 14:11:58.436100] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:20.606 [2024-07-14 14:11:58.436112] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:20.606 [2024-07-14 14:11:58.436121] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:20.606 [2024-07-14 14:11:58.436257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.606 [2024-07-14 14:11:58.436279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:20.606 [2024-07-14 14:11:58.436336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:20.606 [2024-07-14 14:11:58.436338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:20.606 14:11:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:20.864 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:20.864 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:20.864 14:11:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:20.864 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:20.864 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:20.864 14:11:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.864 ************************************ 00:36:20.864 START TEST spdk_target_abort 00:36:20.864 ************************************ 00:36:20.864 14:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:20.864 14:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:20.864 14:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:20.864 14:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.864 14:11:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.144 spdk_targetn1 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.144 [2024-07-14 14:12:01.449094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.144 [2024-07-14 14:12:01.481341] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.144 14:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.144 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.668 Initializing NVMe Controllers 00:36:26.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:26.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:26.668 Initialization complete. Launching workers. 
00:36:26.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12807, failed: 0 00:36:26.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1173, failed to submit 11634 00:36:26.668 success 746, unsuccess 427, failed 0 00:36:26.668 14:12:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:26.668 14:12:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:26.668 EAL: No free 2048 kB hugepages reported on node 1 00:36:29.945 Initializing NVMe Controllers 00:36:29.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:29.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:29.945 Initialization complete. Launching workers. 
00:36:29.945 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8713, failed: 0 00:36:29.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 7439 00:36:29.946 success 312, unsuccess 962, failed 0 00:36:29.946 14:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:29.946 14:12:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:29.946 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.221 Initializing NVMe Controllers 00:36:33.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:33.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:33.221 Initialization complete. Launching workers. 
00:36:33.221 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31795, failed: 0 00:36:33.221 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2622, failed to submit 29173 00:36:33.221 success 511, unsuccess 2111, failed 0 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.221 14:12:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1633057 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1633057 ']' 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1633057 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1633057 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1633057' 00:36:34.594 killing process with pid 1633057 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1633057 00:36:34.594 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1633057 00:36:34.853 00:36:34.853 real 0m14.192s 00:36:34.853 user 0m54.015s 00:36:34.853 sys 0m2.391s 00:36:34.853 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:34.853 14:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:34.853 ************************************ 00:36:34.853 END TEST spdk_target_abort 00:36:34.853 ************************************ 00:36:34.853 14:12:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:34.853 14:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:34.853 14:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:34.853 14:12:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.113 ************************************ 00:36:35.113 START TEST kernel_target_abort 00:36:35.113 ************************************ 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:35.113 14:12:12 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:35.113 14:12:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:36.071 Waiting for block devices as requested 00:36:36.071 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:36.328 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:36.328 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:36.328 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:36.617 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:36.617 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:36.617 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:36.617 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:36.617 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:36.876 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:36.876 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:36.876 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:36.876 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:37.133 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:37.133 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:37.133 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:37.133 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local 
device=nvme0n1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:37.391 No valid GPT data, bailing 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:37.391 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:37.651 00:36:37.651 Discovery Log Number of Records 2, Generation counter 2 00:36:37.651 =====Discovery Log Entry 0====== 00:36:37.651 trtype: tcp 00:36:37.651 adrfam: ipv4 00:36:37.651 subtype: current discovery subsystem 00:36:37.651 treq: not specified, sq flow control disable supported 00:36:37.651 portid: 1 00:36:37.651 trsvcid: 4420 00:36:37.651 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:37.651 traddr: 10.0.0.1 00:36:37.651 eflags: none 00:36:37.651 sectype: none 00:36:37.651 =====Discovery Log Entry 1====== 00:36:37.651 trtype: tcp 00:36:37.651 adrfam: ipv4 00:36:37.651 subtype: nvme subsystem 00:36:37.651 treq: not specified, sq flow control disable supported 00:36:37.651 portid: 1 00:36:37.651 trsvcid: 4420 00:36:37.651 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:37.651 traddr: 10.0.0.1 00:36:37.651 eflags: none 00:36:37.651 sectype: none 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.651 14:12:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.651 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.939 Initializing NVMe Controllers 00:36:40.939 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.939 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.939 Initialization complete. Launching workers. 
00:36:40.939 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45897, failed: 0 00:36:40.939 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45897, failed to submit 0 00:36:40.939 success 0, unsuccess 45897, failed 0 00:36:40.939 14:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.939 14:12:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.939 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.229 Initializing NVMe Controllers 00:36:44.229 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.229 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.229 Initialization complete. Launching workers. 
00:36:44.229 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88447, failed: 0 00:36:44.229 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22290, failed to submit 66157 00:36:44.229 success 0, unsuccess 22290, failed 0 00:36:44.230 14:12:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:44.230 14:12:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.230 EAL: No free 2048 kB hugepages reported on node 1 00:36:46.753 Initializing NVMe Controllers 00:36:46.753 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:46.753 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:46.753 Initialization complete. Launching workers. 
00:36:46.753 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82848, failed: 0 00:36:46.753 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20682, failed to submit 62166 00:36:46.753 success 0, unsuccess 20682, failed 0 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:46.753 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:47.011 14:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:47.944 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:47.944 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:47.944 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:47.944 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:47.944 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:47.944 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:47.944 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:47.944 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:47.944 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:47.944 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:47.944 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:47.944 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:48.202 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:48.202 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:48.202 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:48.202 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:49.135 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:49.135 00:36:49.135 real 0m14.147s 00:36:49.135 user 0m6.324s 00:36:49.135 sys 0m3.132s 00:36:49.135 14:12:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:49.135 14:12:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.135 ************************************ 00:36:49.135 END TEST kernel_target_abort 00:36:49.135 ************************************ 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:49.135 rmmod nvme_tcp 00:36:49.135 rmmod nvme_fabrics 00:36:49.135 rmmod nvme_keyring 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1633057 ']' 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1633057 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1633057 ']' 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1633057 00:36:49.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1633057) - No such process 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1633057 is not found' 00:36:49.135 Process with pid 1633057 is not found 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:49.135 14:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:50.509 Waiting for block devices as requested 00:36:50.510 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:50.510 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:50.510 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:50.510 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:50.768 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:50.768 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:50.768 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:50.768 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:51.026 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:51.026 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:51.026 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:51.026 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:51.284 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:51.284 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:51.284 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:51.284 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:51.541 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:51.541 14:12:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.071 14:12:31 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:54.071 00:36:54.071 real 0m37.661s 00:36:54.071 user 1m2.386s 00:36:54.071 sys 0m8.821s 00:36:54.071 14:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:54.071 14:12:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:54.071 ************************************ 00:36:54.071 END TEST nvmf_abort_qd_sizes 00:36:54.071 ************************************ 00:36:54.071 14:12:31 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:54.071 14:12:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:54.071 14:12:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:54.071 14:12:31 -- common/autotest_common.sh@10 -- # set +x 00:36:54.071 ************************************ 00:36:54.071 START TEST keyring_file 00:36:54.071 ************************************ 00:36:54.071 14:12:31 keyring_file -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:54.071 * Looking for test storage... 00:36:54.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:54.071 14:12:31 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.071 14:12:31 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.071 14:12:31 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.071 14:12:31 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.071 14:12:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.071 14:12:31 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.071 14:12:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.071 14:12:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:54.071 14:12:31 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:54.071 14:12:31 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:54.071 14:12:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:54.071 14:12:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:54.071 14:12:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:54.071 14:12:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:54.071 14:12:31 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ITMp6zC1bn 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ITMp6zC1bn 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ITMp6zC1bn 00:36:54.072 14:12:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ITMp6zC1bn 00:36:54.072 14:12:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5PMwXnhusO 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:54.072 14:12:31 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:54.072 14:12:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5PMwXnhusO 00:36:54.072 14:12:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5PMwXnhusO 00:36:54.072 14:12:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5PMwXnhusO 00:36:54.072 14:12:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=1639424 00:36:54.072 14:12:31 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:54.072 14:12:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1639424 00:36:54.072 14:12:31 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1639424 ']' 00:36:54.072 14:12:31 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:54.072 14:12:31 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:54.072 14:12:31 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:54.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:54.072 14:12:31 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:54.072 14:12:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:54.072 [2024-07-14 14:12:31.718755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:36:54.072 [2024-07-14 14:12:31.718854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639424 ] 00:36:54.072 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.072 [2024-07-14 14:12:31.780715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.072 [2024-07-14 14:12:31.870429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:54.329 14:12:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:54.329 [2024-07-14 14:12:32.109512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.329 null0 00:36:54.329 [2024-07-14 14:12:32.141567] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:54.329 [2024-07-14 14:12:32.142041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:54.329 [2024-07-14 14:12:32.149585] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.329 14:12:32 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:54.329 [2024-07-14 14:12:32.157596] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:54.329 request: 00:36:54.329 { 00:36:54.329 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.329 "secure_channel": false, 00:36:54.329 "listen_address": { 00:36:54.329 "trtype": "tcp", 00:36:54.329 "traddr": "127.0.0.1", 00:36:54.329 "trsvcid": "4420" 00:36:54.329 }, 00:36:54.329 "method": "nvmf_subsystem_add_listener", 00:36:54.329 "req_id": 1 00:36:54.329 } 00:36:54.329 Got JSON-RPC error response 00:36:54.329 response: 00:36:54.329 { 00:36:54.329 "code": -32602, 00:36:54.329 "message": "Invalid parameters" 00:36:54.329 } 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:54.329 14:12:32 keyring_file -- keyring/file.sh@46 -- # bperfpid=1639434 00:36:54.329 14:12:32 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:54.329 14:12:32 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1639434 /var/tmp/bperf.sock 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1639434 ']' 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:54.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:54.329 14:12:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:54.329 [2024-07-14 14:12:32.205434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:36:54.329 [2024-07-14 14:12:32.205513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639434 ] 00:36:54.329 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.329 [2024-07-14 14:12:32.267221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.586 [2024-07-14 14:12:32.358140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.586 14:12:32 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:54.586 14:12:32 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:54.586 14:12:32 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:36:54.586 14:12:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:36:54.842 14:12:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5PMwXnhusO 00:36:54.842 14:12:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5PMwXnhusO 00:36:55.098 14:12:32 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:55.098 14:12:32 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:55.098 14:12:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.098 14:12:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.098 14:12:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.355 14:12:33 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ITMp6zC1bn == \/\t\m\p\/\t\m\p\.\I\T\M\p\6\z\C\1\b\n ]] 00:36:55.355 
14:12:33 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:55.355 14:12:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:55.355 14:12:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.355 14:12:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:55.355 14:12:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.611 14:12:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5PMwXnhusO == \/\t\m\p\/\t\m\p\.\5\P\M\w\X\n\h\u\s\O ]] 00:36:55.611 14:12:33 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:55.611 14:12:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.611 14:12:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.611 14:12:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.611 14:12:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.611 14:12:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.867 14:12:33 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:55.867 14:12:33 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:55.867 14:12:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:55.867 14:12:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.867 14:12:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.867 14:12:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.867 14:12:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:56.123 14:12:33 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:56.123 14:12:33 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.123 14:12:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:56.380 [2024-07-14 14:12:34.162968] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:56.380 nvme0n1 00:36:56.380 14:12:34 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:56.380 14:12:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.380 14:12:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.380 14:12:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.380 14:12:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.380 14:12:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.636 14:12:34 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:56.636 14:12:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:56.636 14:12:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:56.636 14:12:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.636 14:12:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.636 14:12:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.636 14:12:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:56.893 14:12:34 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:56.893 14:12:34 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:56.893 Running I/O for 1 seconds... 00:36:58.312 00:36:58.312 Latency(us) 00:36:58.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.312 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:58.312 nvme0n1 : 1.01 8193.99 32.01 0.00 0.00 15541.52 8592.50 27573.67 00:36:58.312 =================================================================================================================== 00:36:58.312 Total : 8193.99 32.01 0.00 0.00 15541.52 8592.50 27573.67 00:36:58.312 0 00:36:58.312 14:12:35 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:58.312 14:12:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:58.312 14:12:36 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:58.312 14:12:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:58.312 14:12:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:58.312 14:12:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.312 14:12:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.312 14:12:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:58.570 14:12:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:58.570 14:12:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:58.570 14:12:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:58.570 14:12:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:58.570 14:12:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:58.570 14:12:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.570 14:12:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:58.829 14:12:36 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:58.829 14:12:36 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.829 14:12:36 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:58.829 14:12:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:59.087 [2024-07-14 14:12:36.840142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:36:59.087 [2024-07-14 14:12:36.840686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657730 (107): Transport endpoint is not connected 00:36:59.087 [2024-07-14 14:12:36.841677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657730 (9): Bad file descriptor 00:36:59.087 [2024-07-14 14:12:36.842675] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:59.087 [2024-07-14 14:12:36.842697] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:59.087 [2024-07-14 14:12:36.842711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:59.087 request: 00:36:59.087 { 00:36:59.087 "name": "nvme0", 00:36:59.087 "trtype": "tcp", 00:36:59.087 "traddr": "127.0.0.1", 00:36:59.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.087 "adrfam": "ipv4", 00:36:59.087 "trsvcid": "4420", 00:36:59.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.087 "psk": "key1", 00:36:59.087 "method": "bdev_nvme_attach_controller", 00:36:59.087 "req_id": 1 00:36:59.087 } 00:36:59.087 Got JSON-RPC error response 00:36:59.087 response: 00:36:59.087 { 00:36:59.087 "code": -5, 00:36:59.087 "message": "Input/output error" 00:36:59.087 } 00:36:59.087 14:12:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:59.087 14:12:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:59.087 14:12:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:59.087 14:12:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:59.087 14:12:36 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:59.087 14:12:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:59.087 14:12:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.087 14:12:36 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.087 14:12:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.087 14:12:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.345 14:12:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:59.345 14:12:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:59.345 14:12:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:59.345 14:12:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.345 14:12:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.345 14:12:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.345 14:12:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:59.602 14:12:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:59.602 14:12:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:59.602 14:12:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:59.860 14:12:37 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:59.860 14:12:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:00.119 14:12:37 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:00.119 14:12:37 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:00.119 14:12:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.119 14:12:38 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:00.119 14:12:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ITMp6zC1bn 00:37:00.119 14:12:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.119 14:12:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:37:00.119 14:12:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:37:00.378 [2024-07-14 14:12:38.323625] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ITMp6zC1bn': 0100660 00:37:00.378 [2024-07-14 14:12:38.323665] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:00.378 request: 00:37:00.378 { 00:37:00.378 "name": "key0", 00:37:00.378 "path": "/tmp/tmp.ITMp6zC1bn", 00:37:00.378 "method": "keyring_file_add_key", 00:37:00.378 "req_id": 1 00:37:00.378 } 00:37:00.378 Got JSON-RPC error response 00:37:00.378 response: 00:37:00.378 { 00:37:00.378 "code": -1, 00:37:00.378 "message": "Operation not permitted" 00:37:00.378 } 00:37:00.378 14:12:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:00.378 14:12:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 
128 )) 00:37:00.378 14:12:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:00.378 14:12:38 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:00.378 14:12:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ITMp6zC1bn 00:37:00.378 14:12:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:37:00.378 14:12:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ITMp6zC1bn 00:37:00.636 14:12:38 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ITMp6zC1bn 00:37:00.636 14:12:38 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:00.636 14:12:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:00.636 14:12:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.636 14:12:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.636 14:12:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.636 14:12:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.895 14:12:38 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:00.895 14:12:38 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.895 14:12:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:00.895 14:12:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.895 14:12:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:00.895 
14:12:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.895 14:12:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:00.895 14:12:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.895 14:12:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.895 14:12:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.153 [2024-07-14 14:12:39.057580] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ITMp6zC1bn': No such file or directory 00:37:01.153 [2024-07-14 14:12:39.057615] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:01.153 [2024-07-14 14:12:39.057646] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:01.153 [2024-07-14 14:12:39.057659] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:01.153 [2024-07-14 14:12:39.057672] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:01.153 request: 00:37:01.153 { 00:37:01.153 "name": "nvme0", 00:37:01.153 "trtype": "tcp", 00:37:01.153 "traddr": "127.0.0.1", 00:37:01.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:01.153 "adrfam": "ipv4", 00:37:01.153 "trsvcid": "4420", 00:37:01.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:01.153 "psk": "key0", 00:37:01.153 "method": "bdev_nvme_attach_controller", 00:37:01.153 "req_id": 1 00:37:01.153 } 00:37:01.153 Got JSON-RPC error response 00:37:01.153 response: 
00:37:01.153 { 00:37:01.153 "code": -19, 00:37:01.153 "message": "No such device" 00:37:01.153 } 00:37:01.153 14:12:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:01.153 14:12:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:01.153 14:12:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:01.153 14:12:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:01.153 14:12:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:01.153 14:12:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:01.411 14:12:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9kjI8TUope 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:01.411 14:12:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:01.411 14:12:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:01.411 14:12:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:01.411 14:12:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:01.411 14:12:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:01.411 14:12:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:01.411 
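The `prep_key` sequence above (`format_interchange_psk` → `format_key NVMeTLSkey-1 … 0` → `python -`) writes a TLS PSK in the NVMe interchange format to a temp file. As a hedged sketch of what that inline Python step produces — assuming the standard interchange layout (prefix, two-hex-digit hash indicator, base64 of the key bytes with a little-endian CRC32 appended, trailing colon); the exact SPDK implementation lives in `nvmf/common.sh` and may differ in detail:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int) -> str:
    """Sketch of the PSK interchange encoding driven by prep_key.

    Assumption: the configured key has its CRC32 appended (little-endian)
    and the result is base64-encoded between the NVMeTLSkey-1 prefix and
    the hash-indicator field (digest 0 = no hash).
    """
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

# Same key material and digest as the test: 00112233445566778899aabbccddeeff, 0
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The resulting string is what gets written to paths like `/tmp/tmp.9kjI8TUope` and later loaded with `keyring_file_add_key`.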
14:12:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9kjI8TUope 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9kjI8TUope 00:37:01.411 14:12:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9kjI8TUope 00:37:01.411 14:12:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9kjI8TUope 00:37:01.411 14:12:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9kjI8TUope 00:37:01.668 14:12:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.668 14:12:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.925 nvme0n1 00:37:02.184 14:12:39 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:02.184 14:12:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:02.184 14:12:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.184 14:12:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.184 14:12:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.184 14:12:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.184 14:12:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:02.184 14:12:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:02.184 14:12:40 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:02.442 14:12:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:02.442 14:12:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:02.442 14:12:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.442 14:12:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.442 14:12:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.700 14:12:40 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:02.700 14:12:40 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:02.700 14:12:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:02.700 14:12:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.700 14:12:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.700 14:12:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.700 14:12:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.958 14:12:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:02.958 14:12:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:02.958 14:12:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:03.216 14:12:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:03.216 14:12:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:03.216 14:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:03.474 14:12:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:03.474 14:12:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9kjI8TUope 00:37:03.474 14:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9kjI8TUope 00:37:03.731 14:12:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5PMwXnhusO 00:37:03.731 14:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5PMwXnhusO 00:37:03.988 14:12:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:03.988 14:12:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.246 nvme0n1 00:37:04.246 14:12:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:04.246 14:12:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:04.811 14:12:42 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:04.811 "subsystems": [ 00:37:04.811 { 00:37:04.811 "subsystem": "keyring", 00:37:04.811 "config": [ 00:37:04.811 { 00:37:04.811 "method": "keyring_file_add_key", 00:37:04.811 "params": { 00:37:04.811 "name": "key0", 00:37:04.811 "path": "/tmp/tmp.9kjI8TUope" 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "keyring_file_add_key", 00:37:04.811 "params": { 00:37:04.811 "name": "key1", 
00:37:04.811 "path": "/tmp/tmp.5PMwXnhusO" 00:37:04.811 } 00:37:04.811 } 00:37:04.811 ] 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "subsystem": "iobuf", 00:37:04.811 "config": [ 00:37:04.811 { 00:37:04.811 "method": "iobuf_set_options", 00:37:04.811 "params": { 00:37:04.811 "small_pool_count": 8192, 00:37:04.811 "large_pool_count": 1024, 00:37:04.811 "small_bufsize": 8192, 00:37:04.811 "large_bufsize": 135168 00:37:04.811 } 00:37:04.811 } 00:37:04.811 ] 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "subsystem": "sock", 00:37:04.811 "config": [ 00:37:04.811 { 00:37:04.811 "method": "sock_set_default_impl", 00:37:04.811 "params": { 00:37:04.811 "impl_name": "posix" 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "sock_impl_set_options", 00:37:04.811 "params": { 00:37:04.811 "impl_name": "ssl", 00:37:04.811 "recv_buf_size": 4096, 00:37:04.811 "send_buf_size": 4096, 00:37:04.811 "enable_recv_pipe": true, 00:37:04.811 "enable_quickack": false, 00:37:04.811 "enable_placement_id": 0, 00:37:04.811 "enable_zerocopy_send_server": true, 00:37:04.811 "enable_zerocopy_send_client": false, 00:37:04.811 "zerocopy_threshold": 0, 00:37:04.811 "tls_version": 0, 00:37:04.811 "enable_ktls": false 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "sock_impl_set_options", 00:37:04.811 "params": { 00:37:04.811 "impl_name": "posix", 00:37:04.811 "recv_buf_size": 2097152, 00:37:04.811 "send_buf_size": 2097152, 00:37:04.811 "enable_recv_pipe": true, 00:37:04.811 "enable_quickack": false, 00:37:04.811 "enable_placement_id": 0, 00:37:04.811 "enable_zerocopy_send_server": true, 00:37:04.811 "enable_zerocopy_send_client": false, 00:37:04.811 "zerocopy_threshold": 0, 00:37:04.811 "tls_version": 0, 00:37:04.811 "enable_ktls": false 00:37:04.811 } 00:37:04.811 } 00:37:04.811 ] 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "subsystem": "vmd", 00:37:04.811 "config": [] 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "subsystem": "accel", 00:37:04.811 "config": [ 
00:37:04.811 { 00:37:04.811 "method": "accel_set_options", 00:37:04.811 "params": { 00:37:04.811 "small_cache_size": 128, 00:37:04.811 "large_cache_size": 16, 00:37:04.811 "task_count": 2048, 00:37:04.811 "sequence_count": 2048, 00:37:04.811 "buf_count": 2048 00:37:04.811 } 00:37:04.811 } 00:37:04.811 ] 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "subsystem": "bdev", 00:37:04.811 "config": [ 00:37:04.811 { 00:37:04.811 "method": "bdev_set_options", 00:37:04.811 "params": { 00:37:04.811 "bdev_io_pool_size": 65535, 00:37:04.811 "bdev_io_cache_size": 256, 00:37:04.811 "bdev_auto_examine": true, 00:37:04.811 "iobuf_small_cache_size": 128, 00:37:04.811 "iobuf_large_cache_size": 16 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "bdev_raid_set_options", 00:37:04.811 "params": { 00:37:04.811 "process_window_size_kb": 1024 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "bdev_iscsi_set_options", 00:37:04.811 "params": { 00:37:04.811 "timeout_sec": 30 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "bdev_nvme_set_options", 00:37:04.811 "params": { 00:37:04.811 "action_on_timeout": "none", 00:37:04.811 "timeout_us": 0, 00:37:04.811 "timeout_admin_us": 0, 00:37:04.811 "keep_alive_timeout_ms": 10000, 00:37:04.811 "arbitration_burst": 0, 00:37:04.811 "low_priority_weight": 0, 00:37:04.811 "medium_priority_weight": 0, 00:37:04.811 "high_priority_weight": 0, 00:37:04.811 "nvme_adminq_poll_period_us": 10000, 00:37:04.811 "nvme_ioq_poll_period_us": 0, 00:37:04.811 "io_queue_requests": 512, 00:37:04.811 "delay_cmd_submit": true, 00:37:04.811 "transport_retry_count": 4, 00:37:04.811 "bdev_retry_count": 3, 00:37:04.811 "transport_ack_timeout": 0, 00:37:04.811 "ctrlr_loss_timeout_sec": 0, 00:37:04.811 "reconnect_delay_sec": 0, 00:37:04.811 "fast_io_fail_timeout_sec": 0, 00:37:04.811 "disable_auto_failback": false, 00:37:04.811 "generate_uuids": false, 00:37:04.811 "transport_tos": 0, 00:37:04.811 "nvme_error_stat": false, 
00:37:04.811 "rdma_srq_size": 0, 00:37:04.811 "io_path_stat": false, 00:37:04.811 "allow_accel_sequence": false, 00:37:04.811 "rdma_max_cq_size": 0, 00:37:04.811 "rdma_cm_event_timeout_ms": 0, 00:37:04.811 "dhchap_digests": [ 00:37:04.811 "sha256", 00:37:04.811 "sha384", 00:37:04.811 "sha512" 00:37:04.811 ], 00:37:04.811 "dhchap_dhgroups": [ 00:37:04.811 "null", 00:37:04.811 "ffdhe2048", 00:37:04.811 "ffdhe3072", 00:37:04.811 "ffdhe4096", 00:37:04.811 "ffdhe6144", 00:37:04.811 "ffdhe8192" 00:37:04.811 ] 00:37:04.811 } 00:37:04.811 }, 00:37:04.811 { 00:37:04.811 "method": "bdev_nvme_attach_controller", 00:37:04.811 "params": { 00:37:04.811 "name": "nvme0", 00:37:04.811 "trtype": "TCP", 00:37:04.811 "adrfam": "IPv4", 00:37:04.811 "traddr": "127.0.0.1", 00:37:04.811 "trsvcid": "4420", 00:37:04.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.811 "prchk_reftag": false, 00:37:04.812 "prchk_guard": false, 00:37:04.812 "ctrlr_loss_timeout_sec": 0, 00:37:04.812 "reconnect_delay_sec": 0, 00:37:04.812 "fast_io_fail_timeout_sec": 0, 00:37:04.812 "psk": "key0", 00:37:04.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.812 "hdgst": false, 00:37:04.812 "ddgst": false 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "bdev_nvme_set_hotplug", 00:37:04.812 "params": { 00:37:04.812 "period_us": 100000, 00:37:04.812 "enable": false 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "bdev_wait_for_examine" 00:37:04.812 } 00:37:04.812 ] 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "subsystem": "nbd", 00:37:04.812 "config": [] 00:37:04.812 } 00:37:04.812 ] 00:37:04.812 }' 00:37:04.812 14:12:42 keyring_file -- keyring/file.sh@114 -- # killprocess 1639434 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1639434 ']' 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1639434 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:04.812 14:12:42 keyring_file 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1639434 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1639434' 00:37:04.812 killing process with pid 1639434 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@965 -- # kill 1639434 00:37:04.812 Received shutdown signal, test time was about 1.000000 seconds 00:37:04.812 00:37:04.812 Latency(us) 00:37:04.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.812 =================================================================================================================== 00:37:04.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@970 -- # wait 1639434 00:37:04.812 14:12:42 keyring_file -- keyring/file.sh@117 -- # bperfpid=1640767 00:37:04.812 14:12:42 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1640767 /var/tmp/bperf.sock 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1640767 ']' 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.812 14:12:42 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:37:04.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:04.812 14:12:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:04.812 14:12:42 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:04.812 "subsystems": [ 00:37:04.812 { 00:37:04.812 "subsystem": "keyring", 00:37:04.812 "config": [ 00:37:04.812 { 00:37:04.812 "method": "keyring_file_add_key", 00:37:04.812 "params": { 00:37:04.812 "name": "key0", 00:37:04.812 "path": "/tmp/tmp.9kjI8TUope" 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "keyring_file_add_key", 00:37:04.812 "params": { 00:37:04.812 "name": "key1", 00:37:04.812 "path": "/tmp/tmp.5PMwXnhusO" 00:37:04.812 } 00:37:04.812 } 00:37:04.812 ] 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "subsystem": "iobuf", 00:37:04.812 "config": [ 00:37:04.812 { 00:37:04.812 "method": "iobuf_set_options", 00:37:04.812 "params": { 00:37:04.812 "small_pool_count": 8192, 00:37:04.812 "large_pool_count": 1024, 00:37:04.812 "small_bufsize": 8192, 00:37:04.812 "large_bufsize": 135168 00:37:04.812 } 00:37:04.812 } 00:37:04.812 ] 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "subsystem": "sock", 00:37:04.812 "config": [ 00:37:04.812 { 00:37:04.812 "method": "sock_set_default_impl", 00:37:04.812 "params": { 00:37:04.812 "impl_name": "posix" 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "sock_impl_set_options", 00:37:04.812 "params": { 00:37:04.812 "impl_name": "ssl", 00:37:04.812 "recv_buf_size": 4096, 00:37:04.812 "send_buf_size": 4096, 00:37:04.812 "enable_recv_pipe": true, 00:37:04.812 "enable_quickack": false, 00:37:04.812 "enable_placement_id": 0, 00:37:04.812 "enable_zerocopy_send_server": true, 00:37:04.812 "enable_zerocopy_send_client": false, 00:37:04.812 "zerocopy_threshold": 0, 00:37:04.812 "tls_version": 0, 00:37:04.812 "enable_ktls": false 00:37:04.812 } 00:37:04.812 }, 
00:37:04.812 { 00:37:04.812 "method": "sock_impl_set_options", 00:37:04.812 "params": { 00:37:04.812 "impl_name": "posix", 00:37:04.812 "recv_buf_size": 2097152, 00:37:04.812 "send_buf_size": 2097152, 00:37:04.812 "enable_recv_pipe": true, 00:37:04.812 "enable_quickack": false, 00:37:04.812 "enable_placement_id": 0, 00:37:04.812 "enable_zerocopy_send_server": true, 00:37:04.812 "enable_zerocopy_send_client": false, 00:37:04.812 "zerocopy_threshold": 0, 00:37:04.812 "tls_version": 0, 00:37:04.812 "enable_ktls": false 00:37:04.812 } 00:37:04.812 } 00:37:04.812 ] 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "subsystem": "vmd", 00:37:04.812 "config": [] 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "subsystem": "accel", 00:37:04.812 "config": [ 00:37:04.812 { 00:37:04.812 "method": "accel_set_options", 00:37:04.812 "params": { 00:37:04.812 "small_cache_size": 128, 00:37:04.812 "large_cache_size": 16, 00:37:04.812 "task_count": 2048, 00:37:04.812 "sequence_count": 2048, 00:37:04.812 "buf_count": 2048 00:37:04.812 } 00:37:04.812 } 00:37:04.812 ] 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "subsystem": "bdev", 00:37:04.812 "config": [ 00:37:04.812 { 00:37:04.812 "method": "bdev_set_options", 00:37:04.812 "params": { 00:37:04.812 "bdev_io_pool_size": 65535, 00:37:04.812 "bdev_io_cache_size": 256, 00:37:04.812 "bdev_auto_examine": true, 00:37:04.812 "iobuf_small_cache_size": 128, 00:37:04.812 "iobuf_large_cache_size": 16 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "bdev_raid_set_options", 00:37:04.812 "params": { 00:37:04.812 "process_window_size_kb": 1024 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "bdev_iscsi_set_options", 00:37:04.812 "params": { 00:37:04.812 "timeout_sec": 30 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "bdev_nvme_set_options", 00:37:04.812 "params": { 00:37:04.812 "action_on_timeout": "none", 00:37:04.812 "timeout_us": 0, 00:37:04.812 "timeout_admin_us": 0, 00:37:04.812 
"keep_alive_timeout_ms": 10000, 00:37:04.812 "arbitration_burst": 0, 00:37:04.812 "low_priority_weight": 0, 00:37:04.812 "medium_priority_weight": 0, 00:37:04.812 "high_priority_weight": 0, 00:37:04.812 "nvme_adminq_poll_period_us": 10000, 00:37:04.812 "nvme_ioq_poll_period_us": 0, 00:37:04.812 "io_queue_requests": 512, 00:37:04.812 "delay_cmd_submit": true, 00:37:04.812 "transport_retry_count": 4, 00:37:04.812 "bdev_retry_count": 3, 00:37:04.812 "transport_ack_timeout": 0, 00:37:04.812 "ctrlr_loss_timeout_sec": 0, 00:37:04.812 "reconnect_delay_sec": 0, 00:37:04.812 "fast_io_fail_timeout_sec": 0, 00:37:04.812 "disable_auto_failback": false, 00:37:04.812 "generate_uuids": false, 00:37:04.812 "transport_tos": 0, 00:37:04.812 "nvme_error_stat": false, 00:37:04.812 "rdma_srq_size": 0, 00:37:04.812 "io_path_stat": false, 00:37:04.812 "allow_accel_sequence": false, 00:37:04.812 "rdma_max_cq_size": 0, 00:37:04.812 "rdma_cm_event_timeout_ms": 0, 00:37:04.812 "dhchap_digests": [ 00:37:04.812 "sha256", 00:37:04.812 "sha384", 00:37:04.812 "sha512" 00:37:04.812 ], 00:37:04.812 "dhchap_dhgroups": [ 00:37:04.812 "null", 00:37:04.812 "ffdhe2048", 00:37:04.812 "ffdhe3072", 00:37:04.812 "ffdhe4096", 00:37:04.812 "ffdhe6144", 00:37:04.812 "ffdhe8192" 00:37:04.812 ] 00:37:04.812 } 00:37:04.812 }, 00:37:04.812 { 00:37:04.812 "method": "bdev_nvme_attach_controller", 00:37:04.812 "params": { 00:37:04.812 "name": "nvme0", 00:37:04.812 "trtype": "TCP", 00:37:04.812 "adrfam": "IPv4", 00:37:04.812 "traddr": "127.0.0.1", 00:37:04.812 "trsvcid": "4420", 00:37:04.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.813 "prchk_reftag": false, 00:37:04.813 "prchk_guard": false, 00:37:04.813 "ctrlr_loss_timeout_sec": 0, 00:37:04.813 "reconnect_delay_sec": 0, 00:37:04.813 "fast_io_fail_timeout_sec": 0, 00:37:04.813 "psk": "key0", 00:37:04.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.813 "hdgst": false, 00:37:04.813 "ddgst": false 00:37:04.813 } 00:37:04.813 }, 00:37:04.813 { 00:37:04.813 
"method": "bdev_nvme_set_hotplug", 00:37:04.813 "params": { 00:37:04.813 "period_us": 100000, 00:37:04.813 "enable": false 00:37:04.813 } 00:37:04.813 }, 00:37:04.813 { 00:37:04.813 "method": "bdev_wait_for_examine" 00:37:04.813 } 00:37:04.813 ] 00:37:04.813 }, 00:37:04.813 { 00:37:04.813 "subsystem": "nbd", 00:37:04.813 "config": [] 00:37:04.813 } 00:37:04.813 ] 00:37:04.813 }' 00:37:05.071 [2024-07-14 14:12:42.807939] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:05.071 [2024-07-14 14:12:42.808019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640767 ] 00:37:05.071 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.071 [2024-07-14 14:12:42.870803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.071 [2024-07-14 14:12:42.958175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.328 [2024-07-14 14:12:43.144491] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:05.890 14:12:43 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:05.890 14:12:43 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:05.891 14:12:43 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:05.891 14:12:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.891 14:12:43 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:06.147 14:12:44 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:06.147 14:12:44 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:06.147 14:12:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:06.147 14:12:44 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:37:06.147 14:12:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.147 14:12:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.147 14:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.405 14:12:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:06.405 14:12:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:06.405 14:12:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:06.405 14:12:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.405 14:12:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.405 14:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.405 14:12:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:06.663 14:12:44 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:06.663 14:12:44 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:06.663 14:12:44 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:06.663 14:12:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:06.922 14:12:44 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:06.922 14:12:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:06.922 14:12:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9kjI8TUope /tmp/tmp.5PMwXnhusO 00:37:06.922 14:12:44 keyring_file -- keyring/file.sh@20 -- # killprocess 1640767 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1640767 ']' 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1640767 
00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1640767 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1640767' 00:37:06.922 killing process with pid 1640767 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@965 -- # kill 1640767 00:37:06.922 Received shutdown signal, test time was about 1.000000 seconds 00:37:06.922 00:37:06.922 Latency(us) 00:37:06.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.922 =================================================================================================================== 00:37:06.922 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:06.922 14:12:44 keyring_file -- common/autotest_common.sh@970 -- # wait 1640767 00:37:07.180 14:12:44 keyring_file -- keyring/file.sh@21 -- # killprocess 1639424 00:37:07.180 14:12:44 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1639424 ']' 00:37:07.180 14:12:44 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1639424 00:37:07.180 14:12:44 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:07.180 14:12:44 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:07.180 14:12:44 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1639424 00:37:07.180 14:12:45 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:07.180 14:12:45 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:07.180 14:12:45 keyring_file -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1639424' 00:37:07.180 killing process with pid 1639424 00:37:07.180 14:12:45 keyring_file -- common/autotest_common.sh@965 -- # kill 1639424 00:37:07.180 [2024-07-14 14:12:45.014638] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:07.180 14:12:45 keyring_file -- common/autotest_common.sh@970 -- # wait 1639424 00:37:07.439 00:37:07.439 real 0m13.872s 00:37:07.439 user 0m34.897s 00:37:07.439 sys 0m3.166s 00:37:07.439 14:12:45 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:07.439 14:12:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:07.439 ************************************ 00:37:07.439 END TEST keyring_file 00:37:07.439 ************************************ 00:37:07.439 14:12:45 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:07.439 14:12:45 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:07.439 14:12:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:07.439 14:12:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:07.439 14:12:45 -- common/autotest_common.sh@10 -- # set +x 00:37:07.697 ************************************ 00:37:07.697 START TEST keyring_linux 00:37:07.697 ************************************ 00:37:07.697 14:12:45 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:07.697 * Looking for test storage... 
00:37:07.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:07.697 14:12:45 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:07.697 14:12:45 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.697 14:12:45 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:07.697 14:12:45 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.697 14:12:45 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.697 14:12:45 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.697 14:12:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.697 14:12:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.697 14:12:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.697 14:12:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:07.697 14:12:45 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:07.697 14:12:45 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:07.698 14:12:45 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:07.698 /tmp/:spdk-test:key0 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:07.698 14:12:45 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:07.698 14:12:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:07.698 /tmp/:spdk-test:key1 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1641246 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:07.698 14:12:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1641246 00:37:07.698 14:12:45 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1641246 ']' 00:37:07.698 14:12:45 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.698 14:12:45 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:07.698 14:12:45 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.698 14:12:45 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:07.698 14:12:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:07.698 [2024-07-14 14:12:45.615488] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:07.698 [2024-07-14 14:12:45.615570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641246 ] 00:37:07.698 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.698 [2024-07-14 14:12:45.672752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.956 [2024-07-14 14:12:45.755170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.214 14:12:45 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:08.214 14:12:45 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:08.214 14:12:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:08.214 14:12:45 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.214 14:12:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:08.214 [2024-07-14 14:12:45.987538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.214 null0 00:37:08.214 [2024-07-14 14:12:46.019600] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:08.214 [2024-07-14 14:12:46.020090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.214 14:12:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:08.214 1032471329 00:37:08.214 14:12:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:08.214 1033313153 00:37:08.214 14:12:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1641259 00:37:08.214 14:12:46 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:08.214 14:12:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1641259 /var/tmp/bperf.sock 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1641259 ']' 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:08.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:08.214 14:12:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:08.214 [2024-07-14 14:12:46.085240] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:08.214 [2024-07-14 14:12:46.085315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641259 ] 00:37:08.214 EAL: No free 2048 kB hugepages reported on node 1 00:37:08.214 [2024-07-14 14:12:46.146388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.473 [2024-07-14 14:12:46.244884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.473 14:12:46 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:08.473 14:12:46 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:08.473 14:12:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:08.473 14:12:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:08.731 14:12:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:08.731 14:12:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:08.989 14:12:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:08.989 14:12:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:09.247 [2024-07-14 14:12:47.089939] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:09.247 nvme0n1 00:37:09.247 
14:12:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:09.247 14:12:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:09.247 14:12:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:09.247 14:12:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:09.247 14:12:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.247 14:12:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:09.505 14:12:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:09.505 14:12:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:09.505 14:12:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:09.505 14:12:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:09.505 14:12:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.505 14:12:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.505 14:12:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@25 -- # sn=1032471329 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 1032471329 == \1\0\3\2\4\7\1\3\2\9 ]] 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1032471329 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:09.763 14:12:47 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:10.021 Running I/O for 1 seconds... 00:37:10.955 00:37:10.955 Latency(us) 00:37:10.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:10.955 nvme0n1 : 1.01 8715.05 34.04 0.00 0.00 14568.19 4077.80 18835.53 00:37:10.955 =================================================================================================================== 00:37:10.955 Total : 8715.05 34.04 0.00 0.00 14568.19 4077.80 18835.53 00:37:10.955 0 00:37:10.955 14:12:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:10.955 14:12:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:11.213 14:12:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:11.213 14:12:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:11.213 14:12:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:11.213 14:12:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:11.213 14:12:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:11.213 14:12:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.472 14:12:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:11.472 14:12:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:11.472 14:12:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:11.472 14:12:49 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.472 14:12:49 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:11.472 14:12:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:11.730 [2024-07-14 14:12:49.545791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:11.730 [2024-07-14 14:12:49.546157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9adea0 (107): Transport endpoint is not connected 00:37:11.730 [2024-07-14 14:12:49.547151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x9adea0 (9): Bad file descriptor 00:37:11.730 [2024-07-14 14:12:49.548164] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.730 [2024-07-14 14:12:49.548186] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:11.730 [2024-07-14 14:12:49.548201] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.730 request: 00:37:11.730 { 00:37:11.730 "name": "nvme0", 00:37:11.730 "trtype": "tcp", 00:37:11.730 "traddr": "127.0.0.1", 00:37:11.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.730 "adrfam": "ipv4", 00:37:11.730 "trsvcid": "4420", 00:37:11.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.730 "psk": ":spdk-test:key1", 00:37:11.730 "method": "bdev_nvme_attach_controller", 00:37:11.730 "req_id": 1 00:37:11.730 } 00:37:11.730 Got JSON-RPC error response 00:37:11.730 response: 00:37:11.730 { 00:37:11.730 "code": -5, 00:37:11.730 "message": "Input/output error" 00:37:11.730 } 00:37:11.730 14:12:49 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:11.730 14:12:49 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:11.730 14:12:49 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:11.730 14:12:49 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:11.730 14:12:49 keyring_linux -- keyring/linux.sh@33 -- # sn=1032471329 00:37:11.731 14:12:49 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1032471329 00:37:11.731 1 links removed 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@33 -- # sn=1033313153 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1033313153 00:37:11.731 1 links removed 00:37:11.731 14:12:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1641259 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1641259 ']' 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1641259 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1641259 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1641259' 00:37:11.731 killing process with pid 1641259 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@965 -- # kill 1641259 00:37:11.731 Received shutdown signal, test time was about 1.000000 seconds 00:37:11.731 00:37:11.731 Latency(us) 00:37:11.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.731 
=================================================================================================================== 00:37:11.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:11.731 14:12:49 keyring_linux -- common/autotest_common.sh@970 -- # wait 1641259 00:37:11.989 14:12:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1641246 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1641246 ']' 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1641246 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1641246 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1641246' 00:37:11.989 killing process with pid 1641246 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@965 -- # kill 1641246 00:37:11.989 14:12:49 keyring_linux -- common/autotest_common.sh@970 -- # wait 1641246 00:37:12.247 00:37:12.247 real 0m4.770s 00:37:12.247 user 0m9.233s 00:37:12.247 sys 0m1.622s 00:37:12.247 14:12:50 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:12.247 14:12:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:12.247 ************************************ 00:37:12.247 END TEST keyring_linux 00:37:12.247 ************************************ 00:37:12.247 14:12:50 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@321 -- # '[' 0 -eq 
1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:12.247 14:12:50 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:12.247 14:12:50 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:12.247 14:12:50 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:12.247 14:12:50 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:12.247 14:12:50 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:12.247 14:12:50 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:12.247 14:12:50 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:12.247 14:12:50 -- common/autotest_common.sh@10 -- # set +x 00:37:12.247 14:12:50 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:12.247 14:12:50 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:12.247 14:12:50 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:12.247 14:12:50 -- common/autotest_common.sh@10 -- # set +x 00:37:14.146 INFO: APP EXITING 00:37:14.146 INFO: killing all VMs 00:37:14.146 INFO: killing vhost app 00:37:14.146 INFO: EXIT DONE 00:37:15.080 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:15.080 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:15.340 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:15.340 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:15.340 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:15.340 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:15.340 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:15.340 0000:00:04.1 (8086 0e21): 
Already using the ioatdma driver 00:37:15.340 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:15.340 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:15.340 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:15.340 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:15.340 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:15.340 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:15.340 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:15.340 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:15.340 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:16.712 Cleaning 00:37:16.712 Removing: /var/run/dpdk/spdk0/config 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:16.712 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:16.712 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:16.712 Removing: /var/run/dpdk/spdk1/config 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:16.712 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:16.712 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:16.712 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:16.712 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:16.712 Removing: /var/run/dpdk/spdk2/config 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:16.712 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:16.712 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:16.712 Removing: /var/run/dpdk/spdk3/config 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:16.712 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:16.712 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:16.712 Removing: /var/run/dpdk/spdk4/config 00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:16.712 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:37:16.712 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:37:16.712 Removing: /var/run/dpdk/spdk4/hugepage_info
00:37:16.712 Removing: /dev/shm/bdev_svc_trace.1
00:37:16.712 Removing: /dev/shm/nvmf_trace.0
00:37:16.712 Removing: /dev/shm/spdk_tgt_trace.pid1321573
00:37:16.712 Removing: /var/run/dpdk/spdk0
00:37:16.712 Removing: /var/run/dpdk/spdk1
00:37:16.712 Removing: /var/run/dpdk/spdk2
00:37:16.712 Removing: /var/run/dpdk/spdk3
00:37:16.712 Removing: /var/run/dpdk/spdk4
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1320023
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1320756
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1321573
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1322007
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1322700
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1322840
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1323552
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1323564
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1323806
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1324999
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1325908
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1326215
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1326403
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1326609
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1326798
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1326955
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1327111
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1327363
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1327924
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1330837
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1331009
00:37:16.712 Removing: /var/run/dpdk/spdk_pid1331207
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1331294
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1331605
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1331733
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1332039
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1332136
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1332332
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1332348
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1332586
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1332641
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1333010
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1333165
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1333400
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1333532
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1333672
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1333743
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1334009
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1334172
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1334324
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1334482
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1334754
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1334912
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1335073
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1335239
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1335499
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1335658
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1335814
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1336089
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1336242
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1336404
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1336559
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1336834
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1336992
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1337155
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1337425
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1337580
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1337693
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1337926
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1340035
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1393616
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1396114
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1402956
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1406240
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1408474
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1408979
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1416229
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1416240
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1416887
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1417455
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1418087
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1418575
00:37:16.713 Removing: /var/run/dpdk/spdk_pid1418596
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1418855
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1418992
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1418994
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1420141
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1420693
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1421351
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1421757
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1421883
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1422023
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1422905
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1423620
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1428973
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1429133
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1431746
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1435446
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1437487
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1443739
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1448930
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1450120
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1450917
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1461598
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1463750
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1488936
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1491702
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1492775
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1494088
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1494220
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1494241
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1494382
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1494814
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1496012
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1496729
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1497035
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1498660
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1499083
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1499536
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1502035
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1505291
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1509444
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1532431
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1535194
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1539441
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1540389
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1541363
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1543890
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1546126
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1550326
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1550350
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1553117
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1553280
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1553501
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1553770
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1553775
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1554849
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1556031
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1557207
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1558391
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1559659
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1560856
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1564539
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1564938
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1566268
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1567068
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1571209
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1573180
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1576472
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1579788
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1585992
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1590447
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1590451
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1602745
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1603173
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1604086
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1604533
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1605064
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1605558
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1606001
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1606404
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1608782
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1609036
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1612819
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1612874
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1614590
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1619502
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1619507
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1622392
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1623675
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1625073
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1625930
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1627274
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1628091
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1633529
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1633862
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1634528
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1636315
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1636635
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1636989
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1639424
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1639434
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1640767
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1641246
00:37:16.971 Removing: /var/run/dpdk/spdk_pid1641259
00:37:16.971 Clean
00:37:17.229 14:12:54 -- common/autotest_common.sh@1447 -- # return 0
00:37:17.229 14:12:54 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:37:17.229 14:12:54 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:17.229 14:12:54 -- common/autotest_common.sh@10 -- # set +x
00:37:17.229 14:12:55 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:37:17.229 14:12:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:17.229 14:12:55 -- common/autotest_common.sh@10 -- # set +x
00:37:17.229 14:12:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:17.229 14:12:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:17.229 14:12:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:17.229 14:12:55 -- spdk/autotest.sh@391 -- # hash lcov
00:37:17.229 14:12:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:37:17.229 14:12:55 -- spdk/autotest.sh@393 -- # hostname
00:37:17.229 14:12:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:17.487 geninfo: WARNING: invalid characters removed from testname!
00:37:49.551 14:13:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:49.551 14:13:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:52.082 14:13:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:55.358 14:13:32 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:57.885 14:13:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:01.167 14:13:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:03.693 14:13:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:03.693 14:13:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:03.693 14:13:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:03.693 14:13:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:03.693 14:13:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:03.693 14:13:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:03.693 14:13:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:03.693 14:13:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:03.693 14:13:41 -- paths/export.sh@5 -- $ export PATH
00:38:03.693 14:13:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:03.693 14:13:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:03.693 14:13:41 -- common/autobuild_common.sh@437 -- $ date +%s
00:38:03.693 14:13:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720959221.XXXXXX
00:38:03.693 14:13:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720959221.hp7v47
00:38:03.693 14:13:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:38:03.693 14:13:41 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']'
00:38:03.693 14:13:41 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:38:03.693 14:13:41 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:38:03.693 14:13:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:38:03.693 14:13:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:38:03.693 14:13:41 -- common/autobuild_common.sh@453 -- $ get_config_params
00:38:03.693 14:13:41 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:38:03.693 14:13:41 -- common/autotest_common.sh@10 -- $ set +x
00:38:03.693 14:13:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:38:03.693 14:13:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:38:03.693 14:13:41 -- pm/common@17 -- $ local monitor
00:38:03.693 14:13:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:03.693 14:13:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:03.693 14:13:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:03.693 14:13:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:03.693 14:13:41 -- pm/common@21 -- $ date +%s
00:38:03.693 14:13:41 -- pm/common@25 -- $ sleep 1
00:38:03.693 14:13:41 -- pm/common@21 -- $ date +%s
00:38:03.693 14:13:41 -- pm/common@21 -- $ date +%s
00:38:03.693 14:13:41 -- pm/common@21 -- $ date +%s
00:38:03.693 14:13:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720959221
00:38:03.693 14:13:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720959221
00:38:03.693 14:13:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720959221
00:38:03.693 14:13:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720959221
00:38:03.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720959221_collect-vmstat.pm.log
00:38:03.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720959221_collect-cpu-load.pm.log
00:38:03.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720959221_collect-cpu-temp.pm.log
00:38:03.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720959221_collect-bmc-pm.bmc.pm.log
00:38:04.627 14:13:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:38:04.627 14:13:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:38:04.627 14:13:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:04.627 14:13:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:38:04.627 14:13:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:38:04.627 14:13:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:38:04.627 14:13:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:38:04.627 14:13:42 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:04.627 14:13:42 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:38:04.627 14:13:42 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:04.627 14:13:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:38:04.627 14:13:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:04.627 14:13:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:04.627 14:13:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:04.627 14:13:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:04.627 14:13:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:04.627 14:13:42 -- pm/common@44 -- $ pid=1652475
00:38:04.627 14:13:42 -- pm/common@50 -- $ kill -TERM 1652475
00:38:04.627 14:13:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:04.627 14:13:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:04.627 14:13:42 -- pm/common@44 -- $ pid=1652477
00:38:04.627 14:13:42 -- pm/common@50 -- $ kill -TERM 1652477
00:38:04.627 14:13:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:04.628 14:13:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:04.628 14:13:42 -- pm/common@44 -- $ pid=1652478
00:38:04.628 14:13:42 -- pm/common@50 -- $ kill -TERM 1652478
00:38:04.628 14:13:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:04.628 14:13:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:04.628 14:13:42 -- pm/common@44 -- $ pid=1652504
00:38:04.628 14:13:42 -- pm/common@50 -- $ sudo -E kill -TERM 1652504
00:38:04.886 + [[ -n 1215835 ]]
00:38:04.886 + sudo kill 1215835
00:38:04.895 [Pipeline] }
00:38:04.913 [Pipeline] // stage
00:38:04.918 [Pipeline] }
00:38:04.935 [Pipeline] // timeout
00:38:04.940 [Pipeline] }
00:38:04.956 [Pipeline] // catchError
00:38:04.961 [Pipeline] }
00:38:04.978 [Pipeline] // wrap
00:38:04.983 [Pipeline] }
00:38:04.998 [Pipeline] // catchError
00:38:05.006 [Pipeline] stage
00:38:05.008 [Pipeline] { (Epilogue)
00:38:05.021 [Pipeline] catchError
00:38:05.023 [Pipeline] {
00:38:05.036 [Pipeline] echo
00:38:05.038 Cleanup processes
00:38:05.044 [Pipeline] sh
00:38:05.366 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:05.366 1652622 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:05.366 1652739 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:05.379 [Pipeline] sh
00:38:05.661 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:05.661 ++ grep -v 'sudo pgrep'
00:38:05.661 ++ awk '{print $1}'
00:38:05.661 + sudo kill -9 1652622
00:38:05.672 [Pipeline] sh
00:38:05.953 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:15.925 [Pipeline] sh
00:38:16.210 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:16.210 Artifacts sizes are good
00:38:16.227 [Pipeline] archiveArtifacts
00:38:16.235 Archiving artifacts
00:38:16.471 [Pipeline] sh
00:38:16.757 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:16.774 [Pipeline] cleanWs
00:38:16.785 [WS-CLEANUP] Deleting project workspace...
00:38:16.785 [WS-CLEANUP] Deferred wipeout is used...
00:38:16.792 [WS-CLEANUP] done
00:38:16.794 [Pipeline] }
00:38:16.818 [Pipeline] // catchError
00:38:16.830 [Pipeline] sh
00:38:17.108 + logger -p user.info -t JENKINS-CI
00:38:17.117 [Pipeline] }
00:38:17.134 [Pipeline] // stage
00:38:17.141 [Pipeline] }
00:38:17.161 [Pipeline] // node
00:38:17.167 [Pipeline] End of Pipeline
00:38:17.210 Finished: SUCCESS